
Analyzing Web Browsing Activity

NetworkMiner logo HTTP GET

One of the features included in the newly released version 2.0 of NetworkMiner Professional is a new tab called “Browsers”. This tab shows web browsing requests and responses in a hierarchical tree view, with the identified web browsers as root nodes.

The idea of tracking browser activity this way was suggested to me by Steffen Thorkildsen way back in 2009. I'm therefore happy to finally have this feature implemented in NetworkMiner!

At first glance, the Browser tab looks somewhat like the Hosts tab. One difference is that there can be multiple browsers per host, since each unique HTTP User-Agent is considered a separate browser.

NetworkMiner Professional 2.0 Browsers tab

The web pages (URLs) visited by a browser can be analyzed by expanding the node of that browser. The URLs are organized in a hierarchical structure, so that all URLs visited by clicking a link on a web page are placed under the node of that web page. This enables the analyst to see how a user ended up at a particular URL. NetworkMiner primarily uses the HTTP referer header (the misspelling of referrer stems back to RFC 1945) to backtrack the pages visited before landing at a particular page.
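The tree-building logic itself is simple enough to sketch in a few lines of Python. The snippet below is an illustration of the idea only (not NetworkMiner's actual implementation), building and printing a browsing tree from (URL, Referer) pairs extracted from HTTP requests:

# Minimal sketch: build a browsing tree from (url, referer) pairs.
from collections import defaultdict

requests = [
    ("http://www.bing.com/search?q=create+bitcoin+address", None),
    ("http://www.btcpedia.com/", "http://www.bing.com/search?q=create+bitcoin+address"),
]

children = defaultdict(list)
roots = []
for url, referer in requests:
    if referer is None:
        roots.append(url)              # no Referer header: treat as a root node
    else:
        children[referer].append(url)  # place under the referring page

def print_tree(url, depth=0):
    print("  " * depth + url)
    for child in children[url]:
        print_tree(child, depth + 1)

for root in roots:
    print_tree(root)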

NetworkMiner Professional 2.0 Browsers tab - Bing search
Image: Bing search for “create bitcoin address” that led the user to www.btcpedia.com

The browser tree view also shows HTTP redirects, such as “302 Found” and “301 Moved Permanently”. These redirects can be used in order to see encrypted HTTPS URLs that a user is redirected to, for example when logging in at a website.

NetworkMiner Professional 2.0 Browsers tab - 302 Moved Temporarily
Image: Microsoft responding with a “302 Moved Temporarily" redirect

The icons that show up at some web servers are favicon images that have been passively extracted from the analyzed PCAP file.

NetworkMiner Professional 2.0 Browsers tab - Favicon
Image: Website icons extracted from favicon.ico downloads

We hope the Browser tab can be of help in criminal investigations in order to show whether or not a suspect visited a particular website intentionally. This feature can also be used to track the activity of malware that uses HTTP for command-and-control (C2) as well as to analyze redirect chains used for malware downloads.

NetworkMiner Professional 2.0 Browsers tab - Redirect Chain
Image: PCAP file containing a redirect chain leading to malware downloads

The PCAP file loaded in the screenshot above originally comes from malware-traffic-analysis.net. Note that our analysis was done by running NetworkMiner in Linux to prevent accidental malware infection. The events shown in NetworkMiner's browser tab match the description of the redirect chain provided at malware-traffic-analysis.net:

162.144.66.10 port 80 - www.crowdfundingformybusiness.com - Compromised website
185.14.30.37 port 80 - goog1eanalitics.pw - First redirect
178.32.173.105 port 80 - 178.32.173.105 - Second redirect
46.101.59.201 port 80 - osooraudie.co.vu - Nuclear EK

The redirect chain leads to a Nuclear Exploit Kit (SWF file with MD5 695a07cbcac3ca64010e168fe495ff4a, VirusTotal). Later on the Nuclear EK retrieves the file “kernel1.exe”, which seems to be related to the Kelihos botnet.


Packet Injection Attacks in the Wild


I have previously blogged about packet injection attacks, such as the Chinese DDoS of GitHub and Covert Man-on-the-Side Attacks. However, this time I've decided to share some intelligence on real-world packet injection attacks that have been running for several months and that are still active today.


Packet Injection by Network Operators

Gabi Nakibly, Jaime Schcolnik and Yossi Rubin recently released a very interesting research paper titled “Website-Targeted False Content Injection by Network Operators”, where they analyzed packet injection attacks in the wild. Here's a snippet from the paper's abstract:

It is known that some network operators inject false content into users’ network traffic. Yet all previous works that investigate this practice focus on edge ISPs (Internet Service Providers), namely, those that provide Internet access to end users. Edge ISPs that inject false content affect their customers only. However, in this work we show that not only edge ISPs may inject false content, but also core network operators. These operators can potentially alter the traffic of all Internet users who visit predetermined websites.

The researchers analyzed 1.4 petabits of HTTP traffic, captured at four different locations: three universities and one corporation. Some of their findings have been made available as anonymized PCAP files here:
http://www.cs.technion.ac.il/~gnakibly/TCPInjections/samples.zip

We have attempted to recreate these packet injections by visiting the same URLs again. Unfortunately most of our attempts didn't generate any injected responses, but we did manage to trigger injections for two of the groups listed by Nakibly et al. (“hao” and “GPWA”).


Redirect Race between hao.360.cn and hao123.com

We managed to get very reliable packet injections when visiting the website www.02995.com. We have decided to share one such PCAP file containing a packet injection attack here:
https://www.netresec.com/files/hao123-com_packet-injection.pcap

This is what it looks like when loading that PCAP file into CapLoader and doing a “Flow Transcript” on the first TCP session:

CapLoader Flow Transcript of race between hao.360.cn and hao123.com
Image: CapLoader Flow Transcript (looks a bit like Wireshark's Follow-TCP-Stream)

We can see in the screenshot above that the client requests http://www.02995.com/ and receives two different responses with the same sequence number (3820080905):

  • The first response is a “302 Found”, forwarding the client to:
    http://www.hao123.com/?tn=93803173_s_hao_pg
  • The second response is a “302 Moved Temporarily”, that attempts a redirect to:
    http://hao.360.cn/?src=lm&ls=n4a2f6f3a91

Judging from the IP Time-To-Live (TTL) values we assume that the first response (hao123.com) was an injected packet, while the second response (hao.360.cn) was coming from the real webserver for www.02995.com.

If you have an eye for detail, you might notice that the injected packet doesn't use the standard CR-LF (0x0d 0x0a) line breaks in the HTTP response; it only uses LF (0x0a) as line feed in the HTTP header.
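These indicators are easy to check programmatically as well. The following Python sketch (using scapy; the capture file name and server port 80 are assumptions) prints the sequence number, IP TTL and HTTP status line of every data-carrying TCP segment from the server, which makes competing responses with identical sequence numbers stand out:

from scapy.all import rdpcap, IP, TCP

packets = rdpcap("hao123-com_packet-injection.pcap")  # assumed capture file

for pkt in packets:
    if IP in pkt and TCP in pkt and pkt[TCP].sport == 80:
        payload = bytes(pkt[TCP].payload)
        if payload:
            status_line = payload.split(b"\n", 1)[0].rstrip(b"\r")
            print(pkt[TCP].seq, pkt[IP].ttl, status_line)

Two output lines sharing a sequence number but showing different TTL values and status lines are a strong sign of packet injection.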

Since the injected response arrived before the real response the client followed the injected redirect to www.hao123.com. This is what the browser showed after trying to load www.02995.com:

Browser showing www.hao123.com when trying to visit www.02995.com

SSL encryption is an effective protection against packet injection attacks. So if the user instead enters https://www.02995.com then the browser follows the real redirect to hao.360.cn.

Browser showing hao.360.cn when using SSL to visit www.02995.com


id1.cn redirected to batit.aliyun.com

Prior to the release of Gabi's packet injection paper, the only publicly available PCAP file showing a real-world packet injection was this one:
https://github.com/fox-it/quantuminsert/blob/master/presentations/brocon2015/pcaps/id1.cn-inject.pcap

That PCAP file was released after Yun Zheng Hu (of Fox-IT) gave a presentation titled “Detecting Quantum Insert” at BroCon 2015. A video recording of Yun Zheng's talk is available online, including a live demo of the packet injection.

We have managed to re-trigger this packet injection attack as well, simply by visiting http://id1.cn. Doing so triggers two injected HTTP responses that attempt to redirect the client to http://batit.aliyun.com/alww.html. The target page of the injected responses has a message from the Alibaba Group (aliyun.com) saying that the page has been blocked.

Website blocked message from Alibaba Group

We have decided to also share a PCAP file containing a packet injection attack for id1.cn here:
https://www.netresec.com/files/id1-cn_packet-injection.pcap

This is what it looks like when that PCAP file is loaded into NetworkMiner Professional, and the Browsers tab is opened in order to analyze the various HTTP redirections:

Browsers tab in NetworkMiner Professional 2.0
Image: Browsers tab in NetworkMiner Professional 2.0

Here's a short recap of what is happening in our shared PCAP file for id1.cn:

  • Frame 13 : http://id1.cn is opened
  • Frame 18 : Real server responds with an HTML refresh leading to http://id1.cn/rd.s/Btc5n4unOP4UrIfE?url=http://id1.cn/
  • Frame 20 : The client also receives two injected “403 Forbidden” responses that attempt to redirect the client to http://batit.aliyun.com/alww.html. However, these injected packets arrived too late.
  • Frame 24 : The client proceeds by loading http://id1.cn/rd.s/Btc5n4unOP4UrIfE?url=http://id1.cn/
  • Frame 25 : Two new injected responses are sent, this time successfully redirecting the client to the Alibaba page.
  • Frame 28 : The real response arrives too late.
  • Frame 43 : The client opens the Alibaba page with the message about the site being blocked.

Protecting against Packet Injection Attacks

The best way to protect against TCP packet injection attacks is to use SSL encryption. Relying on HTTP websites to do a redirect to an HTTPS URL isn't enough, since that redirect could be targeted by packet injection. So make sure to actually type “https://” (or use a browser plug-in) in order to avoid being affected by injected TCP packets.


Referenced Capture Files

The following PCAP files have been referenced in this blog post:

  • https://www.netresec.com/files/hao123-com_packet-injection.pcap
  • https://www.netresec.com/files/id1-cn_packet-injection.pcap
  • https://github.com/fox-it/quantuminsert/blob/master/presentations/brocon2015/pcaps/id1.cn-inject.pcap
  • http://www.cs.technion.ac.il/~gnakibly/TCPInjections/samples.zip

For more PCAP files, please visit our list of publicly available PCAP files here: https://www.netresec.com/?page=PcapFiles


Detecting Periodic Flows with CapLoader 1.4

CapLoader 1.4 logo

I am happy to announce a new release of our super-fast PCAP handling tool CapLoader! One of the new features in CapLoader makes it even easier to detect malicious network traffic without having to rely on blacklists, such as IDS signatures.

The new version of CapLoader includes new features such as:

  • Services Tab (more details below)
  • Input filter to limit number of parsed frames
  • Flow Transcript in Hosts and Services tabs
  • Keyword filtering
  • Full filtering capability for all tabs
  • Wireshark style coloring of flows, services and hosts

Services Tab

The biggest addition to version 1.4 of CapLoader is the Services tab, which presents a somewhat new way of aggregating the flows found in a PCAP file. Each row (or “service”) in the services tab represents a unique combination of <Client-IP, Server-IP, Server-port and Transport-protocol>. This means that if a single host makes multiple DNS requests to 8.8.8.8, then all those flows will be merged together as one row in the services tab.

CapLoader Services tab showing DNS requests to 8.8.8.8

This view makes it easy to see if a host is frequently accessing a particular network service. CapLoader even shows if the requests are made with regular intervals, in which case we measure the regularity and determine the most likely period between connections. The idea for measuring regularity comes from Sebastian Garcia's Stratosphere IPS, which can identify botnets by analyzing the periodicity of flows going to a C2 server.
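Netresec hasn't published the exact formula behind CapLoader's “Regularity” score, but the general idea can be sketched in a few lines of Python. The snippet below is an assumption-laden illustration (not CapLoader's actual metric) that approximates regularity as the mean inter-arrival time divided by its standard deviation, so that evenly spaced connections score high:

from statistics import mean, stdev

# Start times (in seconds) of flows for one service, i.e. one unique
# <client IP, server IP, server port, transport protocol> combination.
flow_start_times = [0, 21700, 43450, 65150, 86900]  # hypothetical ~6h beacon

deltas = [b - a for a, b in zip(flow_start_times, flow_start_times[1:])]
period = mean(deltas)
jitter = stdev(deltas)
regularity = period / jitter if jitter > 0 else float("inf")
print("Estimated period: %d s, regularity: %.1f" % (period, regularity))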


Malware Example: Kovter.B

Here's what the Services tab looks like when loading 500 MB of PCAP files from a network where one of the hosts has been infected with malware (Win32/Kovter.B).

CapLoader service ordered on regularity

The services in the screenshot are sorted on the “Regularity” column, so that the most periodic ones are shown at the top. Services with a regularity value greater than 20 can be treated as periodic. In our case we see the top two services having a regularity of 36.9 with an estimated period of roughly 6h 2min. We can visualize the periodic behavior by opening the flows for those two services in a new instance of CapLoader. To do this, simply select the two services' rows, right-click the PCAP icon (in the top-right corner) and select “Open With > CapLoader 1.4.0.0”.

CapLoader Flows tab with periodically accessed service

As you can see in the flows tab, these services are accessed by the client on a regular interval of about 6h 2min. Doing a flow transcript of one such flow additionally reveals that the payload seems suspicious (not HTTP on TCP 80).

CapLoader transcript of Kovter.B C2 attempt (hex)
Image: Kovter.B malware trying to communicate with a C2 server

The Kovter malware failed to reach the C2 server in the attempt above, but there is a successful connection going to a C2 server at 12.25.99.131 every third hour (see service number 8 in the list of the most periodically accessed services). Here's a flow transcript of one such beacon:

CapLoader Transcript of Kovter.B C2 traffic
Image: Kovter.B malware talking to C2 server at 12.25.99.131


Legitimate Periodic Services

Seven out of the 10 most periodically accessed services are actually caused by the Kovter malware trying to reach various C2 servers. The three most periodically accessed services that aren't malicious are:

  • Service #3 is a legitimate Microsoft service (SeaPort connecting to toolbar.search.msn.com.akadns.net)
  • Service #5 is a mail client connecting to the local POP3 server every 30 minutes.
  • Service #6 is Microsoft-CryptoAPI updating its Certificate Revocation List from crl.microsoft.com every 5 hours.

Signature-Free Intrusion Detection

As shown in this blog post, analyzing the regularity of services is an efficient way of detecting C2 beacons without having to rely on IDS signatures. This method goes hand-in-hand with our Rinse-Repeat Intrusion Detection approach, which can be used to find malicious network traffic simply by ignoring traffic that seems “normal”.


Credits

Several bugs have been fixed in CapLoader 1.4, such as:

  • Support for frames with Captured Length > Real Length (Thanks to Dietrich Hasselhorn for finding this bug)
  • Delete key is no longer hijacked by the “Hide Selected Flows” button (Thanks to Dominik Andreansky for finding this bug).
  • CapLoader GUI now looks okay even when graphics are scaled through “custom sizing”. Thanks to Roland Wagner for finding this.

Downloading CapLoader 1.4

The regularity and period detection is available in our free trial version of CapLoader. To try it out simply grab a copy here:
https://www.netresec.com/?page=CapLoader#trial (no registration needed)

All paying customers with an older version of CapLoader can grab a free update to version 1.4 at our customer portal.


UPDATE June 2, 2016

We're happy to announce that it is now possible to detect Kovter's C2 communication with help of an IDS signature thanks to Edward Fjellskål. Edward shared his IDS signature "NT TROJAN Downloader/Malware/ClickFraud.Win32.Kovter Client CnC Traffic" on the Emerging-Sigs mailing list yesterday. We have worked with Edward on this and the signature has been verified on our Kovter C2 dataset.


UPDATE June 8, 2016

Edward Fjellskål's IDS signature "ET TROJAN Win32.Kovter Client CnC Traffic" has now been published as an Emerging Threats open rule with SID 2022861.

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"ET TROJAN Win32.Kovter Client CnC Traffic"; flow:established,to_server; dsize:4<>256; content:!"HTTP"; content:"|00 00 00|"; offset:1; depth:3; pcre:"/^[\x11\x21-\x26\x41\x45\x70-\x79]/R"; content:!"|00 00|"; distance:0; byte_jump:1,0,from_beginning,post_offset 3; isdataat:!2,relative; pcre:!"/\x00$/"; reference:url,symantec.com/connect/blogs/kovter-malware-learns-poweliks-persistent-fileless-registry-update; classtype:trojan-activity; sid:2022861; rev:1;)

Bug Bounty PCAP T-shirts


As of today we officially launch the 'Netresec Bug Bounty Program'. Unfortunately we don't have the financial muscle of Microsoft, Facebook or Google, so instead of money we'll be giving away t-shirts.

PCAP or it didn't happen t-shirt
Image: PCAP or it didn't happen t-shirt

To be awarded with one of our 'PCAP or it didn't happen' t-shirts you will have to:

  • Be able to reliably crash the latest version of NetworkMiner or CapLoader, or at least make the tool misbehave in some exceptional way.
  • Send a PCAP file that can be used to trigger the bug to info[at]netresec.com.

Those who find bugs will also receive an honorable mention in our blog post covering the release of the new version containing the bug fix.

Additionally, submissions that play a key role in mitigating high-severity security vulnerabilities or addressing very important bugs will be awarded with a free license of either NetworkMiner Professional or the full commercial version of CapLoader.

Happy Bug Bounty Hunting!


PacketCache lets you Go Back in Time

PacketCache logo

Have you ever wanted to go back in time to get a PCAP of something strange that just happened on a PC?
I sure have, many times, which is why we are now releasing a new tool called PacketCache. PacketCache maintains a hive of the most important and recent packets, so that they can be retrieved later on, if there is a need.

Network forensics and incident response is performed post-event, but requires that packets have already been captured during the event to be analyzed. Starting a network sniffer after a suspected intrusion might provide useful insight on what the intruders are up to, but it is much better to be able to go back in time to observe how they gained access to the network and what they did prior to being detected. Many companies and organizations combat this problem by setting up one or several solutions for centralized network packet capturing. These sniffers are typically installed at choke-points on the network, such as in-line with a firewall. However, this prevents the sniffers from capturing network traffic going between hosts on the same local network. Intruders can therefore often perform lateral movement on a compromised network without the risk of having their steps captured by a packet sniffer.

Back to the Future series logo - public domain

USB broadband modem - Copyright Prolineserver 2010 (cc-by-sa-3.0)

We're now trying to improve the situation for the defenders by releasing PacketCache, which is a free (Creative Commons licensed) Windows service that is designed to continuously monitor the network interfaces of a computer and store the captured packets in memory (RAM). PacketCache monitors all IPv4 interfaces, not just the one connected to the corporate network. This way traffic will be captured even on public WiFi networks and Internet connections provided through USB broadband modems (3G/4G).

By default PacketCache reserves 1% of a computer's total physical memory for storing packets. A computer with 4 GB of RAM will thereby allow up to 40 MB of packets to be kept in memory. This might not seem like much, but PacketCache relies on a clever technique that allows it to store only the most important packets. With this technique just 40 MB of storage can be enough to store several days worth of “important” packets.

The “clever technique” we refer to is actually a simple way of removing packets from TCP and UDP sessions as they get older. This way recent communication can be retained in full, while older data is truncated at the end (i.e. only the last packets are removed from a session).
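In Python-flavored pseudocode the aging scheme could look something like the sketch below (a simplified illustration of the concept, not PacketCache's actual code):

# Keep the beginning of every session; trim the tail of old sessions first.
def trim_cache(sessions, max_bytes):
    # sessions: list of packet lists (raw bytes), oldest session first
    total = sum(len(pkt) for session in sessions for pkt in session)
    for session in sessions:               # start trimming the oldest sessions
        while total > max_bytes and len(session) > 1:
            total -= len(session.pop())    # drop the LAST packet of the session
    return total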

PacketCache services in services.msc

To download PacketCache or learn more about this new tool, please visit the official PacketCache page: https://www.netresec.com/?page=PacketCache

PCAP or it didn't happen!


Detect TCP content injection attacks with findject

findject logo

NSA's QUANTUM INSERT attack is probably the most well-known TCP packet injection attack, due to the Snowden revelations regarding how GCHQ used this method to hack into Belgacom. However, the “Five Eyes” are not the only ones who perform this type of attack on the Internet. We now release a tool to help incident responders find these types of packet injection attacks.

Photo by Jasper Bongertz at SharkFest EU 2016

I had the opportunity to attend and present at SharkFest Europe last week. My presentation, titled “Dissecting Man-on-the-Side Attacks”, showed how TCP packet injection attacks can be analyzed if they have been recorded in a packet capture. In my talk I used a Python script called “findject.py”, which reads PCAP files to find TCP packets with duplicate sequence numbers but different content. This script has previously only been shared with vetted parties, but as of my SharkFest presentation findject is now freely available for everyone.

Findject is not the first tool made available to detect TCP content injection attacks. Other detection methods include Suricata's reassembly_overlap_different_data alert, Fox-IT's Bro policy to check for inconsistencies in the first packet with payload, David Stainton's HoneyBadger and Martin Bruse's qisniff. Even though these are all great solutions we found that some of them didn't detect all TCP content injection attacks while others gave too many false positives. We also wanted to have a tool that was fast, portable and simple to use. This led us to create our own TCP injection detection tool.

python findject.py /nsm/pcap/live/*
opening /nsm/pcap/live/ppp0.150922_192034.pcap - no injections
opening /nsm/pcap/live/ppp0.150923_081337.pcap
PACKET INJECTION 42.96.141.35:80-192.168.1.254:59320 SEQ : 402877220
FIRST :
'HTTP/1.1 403 Forbidden\r\nServer: Beaver\r\nCache-Control: no-cache\r\nContent-Type: text/html\r\nContent-Length: 594\r\nConnection: close\r\n\r\n<html>\n<head>\n<meta http-equiv="Content-Type" content="textml;charset=UTF-8" />\n <style>body{background-color:#FFFFFF}</style> \n<title>TestPage</title>\n <script language="javascript" type="text/javascript">\n window.onload = function () { \n document.getElementById("mainFrame").src= "http://batit.aliyun.com/alww.html"; \n }\n</script> \n</head>\n <body>\n <iframe style="width:860px; height:500px;position:absolute;margin-left:-430px;margin-top:-250px;top:50%;left:50%;" id="mainFrame" src="" frameborder="0" scrolling="no"></iframe>\n </body>\n </html>\n\n'
LAST :
'HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: 87\r\nConnection: close\r\n\r\n<html><head><meta http-equiv="refresh" content="0; url=\'http://id1.cn/\'"></head></html>'

opening /nsm/pcap/live/ppp0.150923_115034.pcap - no injections
opening /nsm/pcap/live/ppp0.150924_071617.pcap - no injections

In the example execution above we can see that findject.py detected an injected TCP packet in the file ppp0.150923_081337.pcap, while the other analyzed files contained no injections. The application layer data of the two conflicting TCP segments are printed to standard output with a header indicating whether the segment was the FIRST or LAST one. To find out which segment is the real one and which was the injected one we need to open the PCAP file in either Wireshark, tshark or CapLoader.

tshark -r /nsm/pcap/live/ppp0.150923_081337.pcap -Y "ip.src eq 42.96.141.35 and tcp.port eq 59320" -T fields -e frame.number -e ip.src -e ip.dst -e tcp.seq -e tcp.len -e ip.id -e ip.ttl -o "tcp.relative_sequence_numbers: false"
14 42.96.141.35 192.168.1.254 402877219 0   0x00002e36 94
25 42.96.141.35 192.168.1.254 402877220 726 0x00000d05 70
27 42.96.141.35 192.168.1.254 402877220 726 0x00000d05 69
28 42.96.141.35 192.168.1.254 402877220 170 0x00002e3e 94

The tshark execution above reveals that three packets sent from the web server's IP (42.96.141.35) are carrying data and have the same sequence number (402877220). Packets 25 and 27 are actually identical, while packet 28 is smaller (170 bytes) and has a different payload. The first displayed frame in the tshark output above is the SYN+ACK packet from the TCP 3-way handshake.

So how can we determine which of packets 25, 27 and 28 are real versus injected? Look at the IP-ID and IP-TTL values! Frame 28 has IP-ID and TTL values in line with what we see in the TCP 3-way handshake (TTL=94, IP-ID=0x00002e3e), which implies that this packet is probably authentic. Frames 25 and 27 on the other hand deviate from what we would expect from the server, which tells us that these packets were likely injected (spoofed) into the TCP session through a “man-on-the-side” attack.

findject logo

To learn more about findject.py and download the tool, please visit: https://www.netresec.com/?page=findject

Example captures containing TCP content injection attacks can be found on our Publicly Available PCAP Files page under the “Packet Injection Attacks / Man-on-the-Side Attacks” section.

You can also read our blog posts Covert Man-on-the-Side Attacks and Packet Injection Attacks in the Wild to learn more about TCP packet injection attacks.


Reading cached packets with Wireshark

Would you like to sniff packets that were sent/received some minutes, hours or even days ago in Wireshark? Can't afford to buy a time machine? Then your best chance is to install PacketCache, which allows you to read OLD packets with Wireshark.

Wireshark reading from PacketCache

We recently released a free tool for keeping a cache of recently sent/received network traffic in Windows. The tool, called PacketCache, is actually a Windows service that saves a copy of recent packets in RAM. The cached packets can be read simply by connecting to a named pipe called “PacketCache”, for example by using a PowerShell script as shown on the PacketCache page.
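If you prefer Python over PowerShell, reading the cache can be done in a few lines as well. The sketch below assumes that the named pipe delivers a pcap-formatted byte stream (which is what Wireshark consumes) and simply copies it to a file; you'll need to run it with sufficient privileges while the PacketCache service is running:

# Copy the cached packets from PacketCache's named pipe to a PCAP file.
with open(r"\\.\pipe\PacketCache", "rb") as pipe, open("cache.pcap", "wb") as out:
    while True:
        chunk = pipe.read(4096)
        if not chunk:
            break        # pipe closed: all cached packets have been read
        out.write(chunk)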

After talking to some Wireshark core developers at SharkFest Europe last week we managed to get Wireshark to read packets from PacketCache's named pipe stream. However, you will need to use Wireshark 2.3 or later to properly read from a named pipe. Unfortunately version 2.3 isn't scheduled for release until next summer (2017), so until then you'll have to use one of the automated builds instead. I usually go for the latest WiresharkPortable build, since it doesn't require installation. You can download the portable version of Wireshark 2.3 here:
https://www.wireshark.org/download/automated/win32/

Look for a file called “WiresharkPortable_2.3.[something].paf.exe”.

Follow these steps in order to read packets captured by PacketCache:

  1. Make sure you have Wireshark 2.3.0 (or later)
  2. Start Wireshark with admin rights (right-click > “Run as administrator”)
  3. Press: Capture > Options
  4. Click “Manage Interfaces...”
  5. Select the “Pipes” tab
  6. Press the “+” button to add a named pipe
  7. Name the pipe “\\.\pipe\PacketCache” and press ENTER to save it
  PacketCache pipe interface added in Wireshark
  8. Press “OK” in the Manage Interface window.
  Wireshark with a PacketCache pipe interface
  9. Press “Start” to read the packets from PacketCache
Wireshark reading from PacketCache

The status field in Wireshark will say “Live capture in progress”, which is somewhat true. Wireshark will be updating the GUI live as packets are read from PacketCache, but the packets displayed can be several hours or even days old depending on when they were captured by PacketCache. The “live” capture will stop once all packets have been read from the PacketCache.


BlackNurse Denial of Service Attack


Remember the days back in the 90s when you could cripple someone's Internet connection simply by issuing a few PING commands like “ping -t [target]”? This type of attack was only successful if the victim was on a dial-up modem connection. However, it turns out that a similar form of ICMP flooding can still be used to perform a denial of service attack; even when the victim is on a gigabit network.

The 90's called and wanted their ICMP flood attack back

BlackNurse logo

Analysts at TDC-SOC-CERT (the Security Operations Center of the Danish telecom operator TDC) noticed that a certain type of distributed denial-of-service (DDoS) attack was more effective than others. The analysts found that a special type of ICMP flooding attack could disrupt the network throughput for some customers, even if the attack was just using a modest bandwidth (less than 20 Mbit/s). It turned out that Destination Unreachable ICMP messages (ICMP type 3), such as “port unreachable” (code 3), were consuming significantly more resources on some firewalls compared to the more common ICMP Echo messages associated with the Ping command. The TDC team has dubbed this particular ICMP flooding attack method “BlackNurse”.

TDC's own report about BlackNurse says:

“The BlackNurse attack attracted our attention, because in our anti-DDoS solution we experienced that even though traffic speed and packets per second were very low, this attack could keep our customers' operations down. This even applied to customers with large internet uplinks and large enterprise firewalls in place.”

Cisco ASA firewalls are one product line that can be flooded using the BlackNurse attack. Cisco was informed about the BlackNurse attack in June this year, but decided not to classify this vulnerability as a security issue. Because of this there is no CVE or other vulnerability number associated with BlackNurse.

Evaluation of BlackNurse Denial-of-Service Attacks

Members of the TDC-SOC-CERT set up a lab network to evaluate how effective ICMP type 3 attacks were compared to other ICMP flooding methods. In this setup they used hping3 to send ICMP floods like this:

  • ICMP net unreachable (ICMP type 3, code 0):
    hping3 --icmp -C 3 -K 0 --flood [target]
  • ICMP port unreachable (ICMP type 3, code 3) a.k.a. “BlackNurse”:
    hping3 --icmp -C 3 -K 3 --flood [target]
  • ICMP Echo (Ping):
    hping3 --icmp -C 8 -K 0 --flood [target]
  • ICMP Echo with code 3:
    hping3 --icmp -C 8 -K 3 --flood [target]

The tests showed that Cisco ASA devices used more CPU resources to process the destination unreachable flood attacks (type 3) compared to the ICMP Echo traffic. As a result, when hit by a BlackNurse attack the firewalls start dropping packets that should otherwise have been forwarded. When the packet drops become significant the customer behind the firewall basically drops off the Internet.

The tests also showed that a single attacking machine running hping3 could, on its own, produce enough ICMP type 3 code 3 packets to consume pretty much all the firewall's resources. Members of the TDC-SOC-CERT shared a few PCAP files from their tests with me, so that their results could be verified. One set of these PCAP files contained only the attack traffic, where the first part was generated using the following command:

hping3 --icmp -C 3 -K 3 -i u200 [target]

The “-i u200” in the command above instructs hping3 to send one packet every 200 microseconds. This packet rate can be verified simply by reading the PCAP file with a command like this:

tshark -c 10 -r attack_record_00001.pcapng -T fields -e frame.time_relative -e frame.time_delta -e frame.len -e icmp.type -e icmp.code
0.000000000   0.000000000   72   3   3
0.000207000   0.000207000   72   3   3
0.000415000   0.000208000   72   3   3
0.000623000   0.000208000   72   3   3
0.000830000   0.000207000   72   3   3
0.001038000   0.000208000   72   3   3
0.001246000   0.000208000   72   3   3
0.001454000   0.000208000   72   3   3
0.001661000   0.000207000   72   3   3
0.001869000   0.000208000   72   3   3

The tshark output confirms that hping3 sent an ICMP type 3 code 3 (a.k.a. “port unreachable”) packet every 208 microseconds, which amounts to roughly 5000 packets per second (pps) or 2.7 Mbit/s. We can also use the capinfos tool from the Wireshark/tshark suite to confirm the packet rate and bandwidth like this:

capinfos attack_record_00001.pcapng
Number of packets:   48 k
File size:           5000 kB
Data size:           3461 kB
Capture duration:    9.999656 seconds
First packet time:   2016-06-08 12:25:19.811508
Last packet time:    2016-06-08 12:25:29.811164
Data byte rate:      346 kBps
Data bit rate:       2769 kbps
Average packet size: 72.00 bytes
Average packet rate: 4808 packets/s

A few minutes later they upped the packet rate by using the “--flood” argument instead of the 200 microsecond inter-packet delay, like this:

hping3 --icmp -C 3 -K 3 --flood [target]
capinfos attack_record_00007.pcapng
Number of packets:   3037 k
File size:           315 MB
Data size:           218 MB
Capture duration:    9.999996 seconds
First packet time:   2016-06-08 12:26:19.811324
Last packet time:    2016-06-08 12:26:29.811320
Data byte rate:      21 MBps
Data bit rate:       174 Mbps
Average packet size: 72.00 bytes
Average packet rate: 303 kpackets/s

The capinfos output reveals that hping3 was able to push a whopping 303,000 packets per second (174 Mbit/s), which is way more than what is needed to overload a network device vulnerable to the BlackNurse attack. Unfortunately the PCAP files I got did not contain enough normal Internet background traffic to reliably measure the degradation of the throughput during the denial of service attack, so I had to resort to alternative methods. The approach I found most useful for detecting disruptions in the network traffic was to look at the round-trip times of TCP packets over time.

BlackNurse RTT Wireshark

The graph above measures the time between a TCP data packet and the ACK response of that data segment (called “tcp.analysis.ack_rtt” in Wireshark). The graph shows that the round-trip time only rippled a little due to the 5000 pps BlackNurse attack, but then skyrocketed as a result of the 303 kpps flood. This essentially means that “normal” traffic was prevented from getting through the firewall until the 303 kpps ICMP flood was stopped. However, also notice that even a sustained attack of just 37 kpps (21 Mbit/s or 27 μs inter-packet delay) can be enough to take a gigabit firewall offline.
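For those who want to reproduce this type of graph without Wireshark, here is a rough scapy-based sketch (the capture file name is an assumption) that records when each TCP data segment was sent and measures the time until an ACK covering exactly that segment arrives:

from scapy.all import rdpcap, IP, TCP

packets = rdpcap("background_traffic.pcap")  # assumed capture file
pending = {}  # (src, dst, expected ack number) -> send time

for pkt in packets:
    if IP not in pkt or TCP not in pkt:
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    plen = len(bytes(tcp.payload))
    if plen > 0:
        # remember when this data segment will be fully acknowledged
        pending.setdefault((ip.src, ip.dst, tcp.seq + plen), pkt.time)
    key = (ip.dst, ip.src, tcp.ack)
    if key in pending:
        print("%.6f rtt=%.6f" % (pkt.time, pkt.time - pending.pop(key)))

Note that this simplistic version only matches ACKs that acknowledge a segment boundary exactly, so cumulative ACKs spanning multiple segments are missed; Wireshark's tcp.analysis.ack_rtt handles those cases too.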

Detecting BlackNurse Attacks

TDC-SOC-CERT have released the following SNORT IDS rules for detecting the BlackNurse attack:

alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"TDC-SOC - Possible BlackNurse attack from external source "; itype:3; icode:3; detection_filter:track by_dst, count 250, seconds 1; reference:url, soc.tdc.dk/blacknurse/blacknurse.pdf; metadata:TDC-SOC-CERT,18032016; priority:3; sid:88000012; rev:1;)

alert icmp $HOME_NET any -> $EXTERNAL_NET any (msg:"TDC-SOC - Possible BlackNurse attack from internal source"; itype:3; icode:3; detection_filter:track by_dst, count 250, seconds 1; reference:url, soc.tdc.dk/blacknurse/blacknurse.pdf; metadata:TDC-SOC-CERT,18032016; priority:3; sid:88000013; rev:1;)

Protecting against BlackNurse Attacks

The recommendation from TDC is to deny ICMP type 3 messages sent to the WAN interface of Cisco ASA firewalls in order to prevent the BlackNurse attack. However, before doing so, please read the following excerpt from the Cisco ASA 5500 Series Command Reference:

“We recommend that you grant permission for the ICMP unreachable message type (type 3). Denying ICMP unreachable messages disables ICMP Path MTU discovery, which can halt IPSec and PPTP traffic. See RFC 1195 and RFC 1435 for details about Path MTU Discovery.”

In order to allow Path MTU discovery to function you will need to allow at least ICMP type 3 code 4 packets (fragmentation needed) to be received by the firewall. Unfortunately filtering or rate-limiting on a Cisco ASA does not seem to have an effect against the BlackNurse attack; the CPU maxes out anyway. Our best recommendation for protecting a Cisco ASA firewall against the BlackNurse attack is therefore to rate-limit incoming ICMP traffic on an upstream router.

Another alternative is to upgrade the Cisco ASA to a more high-end model with multiple CPU cores, since the BlackNurse attack seems to be less effective against multi-core ASAs. A third mitigation option is to use a firewall from a different vendor than Cisco. However, please note that it's likely that other vendors also have products that are vulnerable to the BlackNurse attack.

To learn more about the BlackNurse attack, visit blacknurse.dk or download the full BlackNurse report from TDC.

Update November 12, 2016

Devices verified by TDC to be vulnerable to the BlackNurse attack:

  • Cisco ASA 5505, 5506, 5515, 5525 and 5540 (default settings)
  • Cisco ASA 5550 (Legacy) and 5515-X (latest generation)
  • Cisco 897 router
  • Cisco 6500 router (with SUP2T and Netflow v9 on the inbound interface)
  • Fortigate 60c and 100D (even with drop ICMP on). See response from Fortinet.
  • Fortinet v5.4.1 (one CPU consumed)
  • Palo Alto (unless ICMP Flood DoS protection is activated). See advisory from Palo Alto.
  • SonicWall (if misconfigured)
  • Zyxel NWA3560-N (wireless attack from LAN Side)
  • Zyxel Zywall USG50

Update November 17, 2016

There seems to be some confusion/amusement/discussion going on regarding why this attack is called the “BlackNurse”. Also, googling “black nurse” might not be 100 percent safe-for-work, since you risk getting search results with inappropriate videos that have nothing to do with this attack.

The term “BlackNurse”, which has been used within the TDC SOC for some time to denote the “ICMP 3,3” attack, actually refers to the two guys at the SOC who noticed how surprisingly effective this attack was. One of these guys is a former blacksmith and the other a nurse, which is why a colleague of theirs jokingly came up with the name “BlackNurse”. However, although it was first intended as a joke, the team decided to call the attack “BlackNurse” even when going public about it.


NetworkMiner 2.1 Released

NetworkMiner 2.1 Logo

We are releasing a new version of NetworkMiner today. The latest and greatest version of NetworkMiner is now 2.1.

Yay! /throws confetti in the air


Better Email Parsing

I have spent some time during 2016 talking to digital forensics experts at various law enforcement agencies. I learned that from time to time criminals still fail to use encryption when reading their email. This new release of NetworkMiner therefore comes with parsers for POP3 and IMAP as well as an improved SMTP parser. These are the de facto protocols used for sending and receiving emails, and have been so since way back in the 90’s.

Messages tab in NetworkMiner 2.1 showing extracted emails
Messages tab in NetworkMiner 2.1 showing extracted emails

Not only does NetworkMiner show the contents of emails within the tool, it also extracts all attachments to disk and even saves each email as an .eml file that can be opened in an external email reader in order to view it as the suspect would.

Extracted email Get_free_s.eml opened in Mozilla Thunderbird
Extracted email ”Get_free_s.eml” opened in Mozilla Thunderbird

Encapsulation Protocols

There are several protocols that can be used to provide logical separation of network traffic, in order to avoid using multiple physical networks to keep various network segments, security domains or users apart. Some of these techniques for logical separation rely on tagging or labeling, while others are tunneling the encapsulated traffic. Nevertheless, it’s all pretty much the same thing; an encapsulation protocol is used in order to wrap protocol X inside protocol Y, usually while adding some metadata in the process.

NetworkMiner has been able to parse the classic encapsulation protocols 802.1Q, GRE and PPPoE since 2011, but we now see an increased use of protocols that provide logical separation for virtualization and cloud computing environments. We have therefore added parsers in NetworkMiner 2.1 for VXLAN and OpenFlow, which are the two most widely used protocols for logical separation of traffic in virtualized environments. We have also added decapsulation of MPLS and EoMPLS (Ethernet-over-MPLS) to NetworkMiner 2.1.

Encapsulation examples for MPLS GRE and VXLAN

The new release additionally comes with support for the SOCKS protocol, which is an old school encapsulation protocol used by administrators as well as hackers in order to bypass firewalls or provide anonymous Internet access. The SOCKS parser in NetworkMiner can even be used to read network traffic from Tor in cleartext before it enters the Tor network. However, in order to capture Tor’s SOCKS traffic you’ll have to sniff traffic from the Tor client’s localhost interface on TCP port 9150.
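Capturing that localhost SOCKS traffic for later analysis could be done with a scapy one-off like the sketch below (the interface name “lo” is an assumption that fits Linux; on Windows you'd need Npcap's loopback adapter instead):

from scapy.all import sniff, wrpcap

# Sniff the Tor client's local SOCKS traffic for one minute, then save it.
packets = sniff(iface="lo", filter="tcp port 9150", timeout=60)
wrpcap("tor_socks.pcap", packets)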

PacketCache Logo

PacketCache

NetworkMiner 2.1 can read packets directly from a local PacketCache service by clicking “File > Read from PacketCache”. This eliminates the need to run a PowerShell script in order to dump a PCAP file with packets recently captured by PacketCache.

HTTP Partial Content / Range Requests

Byte serving is a feature in HTTP that makes it possible to retrieve only a segment of a file, rather than the complete file, by using the “Range” HTTP header. This feature is often used by BITS in order to download updates for Windows. But we have also seen malware use byte serving, for example malware droppers that attempt to download malicious payloads in a stealthy manner. See Ursnif and Dridex for examples of malware that utilize this technique.

NetworkMiner has previously only reassembled the individual segments of a partial content download. But as of version 2.1 NetworkMiner has the ability to piece together the respective parts into a complete file.
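Conceptually, piecing the parts together just means honoring the offsets in the Content-Range headers. Here's a minimal Python sketch of that merging step (an illustration with hypothetical example data, not NetworkMiner's actual code), which assumes the total size is known rather than “*”:

import re

# segments: (Content-Range header value, body bytes) from 206 responses
segments = [
    ("bytes 0-99/300", b"A" * 100),
    ("bytes 100-299/300", b"B" * 200),
]

total = int(segments[0][0].split("/")[1])
data = bytearray(total)
for content_range, body in segments:
    start = int(re.match(r"bytes (\d+)-(\d+)/", content_range).group(1))
    data[start:start + len(body)] = body  # write each part at its offset

with open("reassembled.bin", "wb") as f:
    f.write(bytes(data))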

SSL/TLS and X.509 Certificates

NetworkMiner has been able to extract X.509 certificates to disk for many years now, simply by opening a PCAP file with SSL traffic. However, in the 2.1 release we’ve added support for parsing out the SSL details also from FTP’s “AUTH TLS” (a.k.a. explicit TLS or explicit SSL) and STARTTLS in SMTP.

NetworkMiner now also extracts details from the SSL handshake and X.509 certificate to the Parameters tab, such as the requested SNI hostname and the Subject CN from the certificate.

SSL and certificate information extracted by NetworkMiner from PCAP
SSL handshake details and certificate info passively extracted from captured HTTPS session to mega.co.nz

NetworkMiner Professional

The new features mentioned so far are all part of the free open source version of NetworkMiner. But we have also added a few additional features to the Professional edition of NetworkMiner as part of the 2.1 release.

The “Browsers” tab of NetworkMiner Professional has been extended with a feature for tracking online ads and web trackers. We are using EasyList and EasyPrivacy from easylist.to in order to provide up-to-date tracking of ads and trackers. HTTP requests related to ads are colored red, while web tracker requests are blue. These colors also apply to the Files tab and can be modified in the Settings menu (Tools > Settings).

NetworkMiner Professional 2.1 showing Advertisements (red) and Trackers (blue)
NetworkMiner Professional 2.1 showing advertisements (red) and Internet trackers (blue).

The reason why NetworkMiner Pro now tracks ads and trackers is that these types of requests can make up almost half of the HTTP requests a normal user makes while surfing the web today. Doing forensics on network traffic from a criminal suspect can be a very time consuming task. We therefore hope that being able to differentiate between traffic initiated by the user and traffic triggered by an online advertisement service or Internet tracker can save time for investigators.

The RIPE database previously contained a bug that prevented NetworkMiner Professional from properly leveraging netname info from RIPE. This bug has now been fixed, so that the Host Details can be enriched with details from RIPE for IP addresses in Europe. To enable the RIPE database you’ll first have to download the raw data by clicking Tools > Download RIPE DB.

Host Details with RIPE netname
Host Details enriched with RIPE description and netname

We have also extended the exported details about the hosts in the CSV and XML files from NetworkMiner Professional and the command line tool NetworkMinerCLI. The exported information now contains details such as IP Time-to-Live and open ports.

Upgrading to version 2.1

Users who have purchased a license for NetworkMiner Professional 2.0 can download a free update to version 2.1 from our customer portal.

Those who instead prefer to use the free and open source version can grab the latest version of NetworkMiner from the official NetworkMiner page.

Credits

There are several persons I would like to thank for contributing with feature requests and bug reports that have been used to improve NetworkMiner. I would like to thank Dietrich Hasselhorn, Christian Reusch, Jasper Bongertz, Eddi Blenkers and Daniel Spiekermann for their feedback, which has helped improve the SMB, SMB2 and HTTP parsers as well as implement various encapsulation protocols. I would also like to thank several investigators at the Swedish, German and Dutch police as well as EUROPOL for the valuable feedback they have provided.


Network Forensics Training at TROOPERS 2017

Troopers logo with Network Forensics Training

I will come back to the awesome TROOPERS conference in Germany this spring to teach my two-day network forensics class on March 20-21.

The training will touch upon topics relevant for law enforcement as well as incident response, such as investigating a defacement, finding backdoors and dealing with a machine infected with real malware. We will also be carving lots of files, emails and other artifacts from the PCAP dataset, as well as performing Rinse-Repeat Intrusion Detection in order to detect covert malicious traffic.

Day 1 - March 20, 2017

The first training day will focus on open source tools that can be used for doing network forensics. We will be using the Security Onion Linux distro for this part, since it contains pretty much all the open source tools you need in order to do network forensics.

Day 2 - March 21, 2017

We will spend the second day mainly using NetworkMiner Professional and CapLoader, i.e. the commercial tools from Netresec. Each student will be provided with a free 6 month license for the latest version of NetworkMiner Professional (see our recent release of version 2.1) and CapLoader. This is a unique chance to learn all the great features of these tools directly from the guy who develops them (me!).

NetworkMiner  CapLoader

The Venue

The Troopers conference and training will be held at the Print Media Academy (PMA) in Heidelberg, Germany.

PMA Early Morning by Alex Hauk
Print Media Academy, image credit: Alex Hauk

Keeping the class small

The number of seats in the training will be limited in order to provide a high-quality interactive training. However, keep in mind that this means we might run out of seats for the network forensics class!

I would like to recommend those who wanna take the training to also attend the Troopers conference on March 22-24. The conference will have some great talks.

However, my greatest takeaway from last year's Troopers was the awesome hallway track, i.e. all the great conversations I had with all the smart people who came to Troopers.

Please note that the tickets to the Troopers conference are also limited, and they seem to sell out quite early each year. So if you are planning to attend the network forensics training, then I recommend that you buy an “All Inclusive” ticket, which includes a two-day training and a conference ticket.

You can read more about the network forensics training at the Troopers website.


10 Years of NetworkMiner


I released the first version of NetworkMiner on February 16, 2007, which is exactly 10 years ago today.

NetworkMiner 0.79 in Windows XP

One of the main uses of NetworkMiner today is to reassemble file transfers from PCAP files and save the extracted files to disk. However, as you can see in the screenshot above, the early versions of NetworkMiner didn’t even have a Files tab. In fact, the task that NetworkMiner was originally designed for was simply to provide an inventory of the hosts communicating on a network.

How it all started

So, why did I start designing a passive asset detection system when I could just as well have used a port scanner like Nmap to fingerprint the devices on a network? Well, I was working with IT security at the R&D department of a major European energy company at the time. As part of my job I occasionally performed IT security audits of power plants. During these audits I typically wanted to ensure that there were no rogue or unknown devices on the network. The normal way of verifying this would be to perform an Nmap scan of the network, but that wasn’t an option for me since I was dealing with live industrial control system networks. I knew from personal experience that a network scan could cause some of the industrial control system devices to drop their network connections or even crash, so active scanning wasn’t a viable option. Instead I chose to set up a SPAN port at a central point of the network, or even install a network TAP, and then capture network traffic to a PCAP file during a few hours. I found the PCAP files to be a great source, not only for identifying the hosts present on a network, but also for discovering misconfigured devices. However, I wasn’t really happy with the tools available for visualizing the devices on the network, which is why I started developing NetworkMiner in my spare time.

Network Forensics

As I continued improving NetworkMiner I pretty soon ended up writing my own TCP reassembly engine as well as parsers for HTTP and the CIFS protocol (a.k.a. SMB). With these protocols in place I was able to extract files downloaded through HTTP or SMB to disk with NetworkMiner, which turned out to be a killer feature.

Monthly downloads of NetworkMiner from SourceForge
Image: Monthly downloads of NetworkMiner from SourceForge

With the ability to extract file transfers from PCAP files NetworkMiner steadily gained popularity as a valuable tool in the field of network forensics, which motivated me to make the tool even better. Throughout these past 10 years I have single-handedly implemented over 60 protocols in NetworkMiner, which has been a great learning experience for me.

NetworkMiner Milestones

Looking Forward

People sometimes ask me what I’m planning to add to the next version of NetworkMiner. To be honest; I never really know. In fact, I’ve realized that those with the best ideas for features or protocols to add to NetworkMiner are those who use NetworkMiner as part of their jobs, such as incident responders and digital forensics experts across the globe.

I therefore highly value feedback from users, so if you have requests for new features to be added to the next version, then please feel free to reach out and let me know!


Enable file extraction from PCAP with NetworkMiner in six steps


NetworkMiner can reassemble files transferred over protocols such as HTTP, FTP, TFTP, SMB, SMB2, SMTP, POP3 and IMAP simply by reading a PCAP file. NetworkMiner stores the extracted files in a directory called “AssembledFiles” inside of the NetworkMiner directory.

NetworkMiner 2.1.1 with Files tab open.
Files extracted by NetworkMiner from the DFRWS 2008 challenge file suspect.pcap

NetworkMiner is a portable tool that is delivered as a zip file. The tool doesn’t require any installation; you simply extract the zip file to your PC. We don’t provide any official guidance regarding where to place NetworkMiner, users are free to place it wherever they find it most fitting. Some put the tool on the Desktop or in “My Documents” while others prefer to put it in “C:\Program Files”. However, please note that normal users usually don’t have write permissions to sub-directories of %programfiles%, which will prevent NetworkMiner from performing file reassembly.

Unfortunately, previous versions of NetworkMiner didn’t alert the user when the tool failed to write to the AssembledFiles directory. This means that the tool would silently fail to extract any files from a PCAP file. This behavior has been changed with the release of NetworkMiner 2.1. Now the user gets a window titled “Insufficient Write Permissions” with a text like this:

User is unauthorized to access the following file:
C:\Program Files\NetworkMiner_2-1-1\AssembledFiles\cache\FILENAME

File(s) will not be extracted!

Follow these steps to set adequate write permissions to the AssembledFiles directory in Windows:

  1. Open the Properties window for the AssembledFiles directory
  2. Open the “Security” tab
  3. Press “Edit” to change permissions
  4. Select the user who will be running NetworkMiner
  5. Check the “Allow” checkbox for Write permissions
  6. Press the OK button
Press Edit to change permissions for AssembledFiles folder

If you are running NetworkMiner under macOS (OS X) or Linux, then please make sure to follow our installation instructions, which include this command:

sudo chmod -R go+w AssembledFiles/

Once you have set up the appropriate write permissions you should be able to start NetworkMiner and open a PCAP file in order to have the tool automatically extract files from the captured network traffic.


CapLoader 1.5 Released

CapLoader 1.5 Logo

We are today happy to announce the release of CapLoader 1.5. This new version of CapLoader parses pcap and pcap-ng files even faster than before and comes with new features, such as a built-in TCP stream reassembly engine, as well as support for Linux and macOS.

Support for ICMP Flows

CapLoader is designed to group packets together that belong to the same bi-directional flow, i.e. all UDP, TCP and SCTP packets with the same 5-tuple (regardless of direction) are considered being part of the same flow.

5-tuple
/fʌɪv ˈtjuːp(ə)l/
noun

A combination of source IP, destination IP, source port, destination port and transport protocol (TCP/UDP/SCTP) used to uniquely identify a flow or layer 4 session in computer networking.

The flow concept in CapLoader 1.5 has been extended to also include ICMP. Since there are no port numbers in the ICMP protocol CapLoader sets the source and destination port of ICMP flows to 0. The addition of ICMP in CapLoader also allows input filters and display filters like “icmp” to be leveraged.
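A direction-agnostic flow key like this is easy to express in code. Here's a small Python sketch (an illustration; CapLoader's internal representation isn't public) where sorting the two endpoints makes both directions map to the same key, and ICMP gets port 0:

def flow_key(ip_a, port_a, ip_b, port_b, proto):
    if proto == "ICMP":
        port_a = port_b = 0   # ICMP has no port numbers
    a, b = (ip_a, port_a), (ip_b, port_b)
    return (min(a, b), max(a, b), proto)  # same key regardless of direction

# Both directions of a DNS lookup map to the same flow:
assert flow_key("10.0.0.1", 51234, "8.8.8.8", 53, "UDP") == \
       flow_key("8.8.8.8", 53, "10.0.0.1", 51234, "UDP")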

Flows tab in CapLoader 1.5 with display filter BPF 'icmp'
Image: CapLoader 1.5 showing only ICMP flows due to display filter 'icmp'.

TCP Stream Reassembly

One of the foundations for making CapLoader a super fast tool for reading and filtering PCAP files is that it doesn’t attempt to reassemble TCP streams. This means that CapLoader’s Transcript view will show out-of-order segments in the order they were received and retransmitted segments will be displayed twice.

The basic concept has been to let other tools do the TCP reassembly, for example by exporting a PCAP for a flow from CapLoader to Wireshark or NetworkMiner.

The steps required to reassemble a TCP stream to disk with Wireshark are:

  1. Right-click a TCP packet in the TCP session of interest.
  2. Select “Follow > TCP Stream”.
  3. Choose direction in the first drop-down-list (client-to-server or server-to-client).
  4. Change format from “ASCII” to “Raw” in the next drop-down-menu.
  5. Press the “Save as...” button to save the reassembled TCP stream to disk.
Follow TCP Stream window in Wireshark

Unfortunately Wireshark fails to properly reassemble some TCP streams. As an example the current stable release of Wireshark (version 2.2.5) shows duplicate data in “Follow TCP Stream” when there are retransmissions with partially overlapping segments. We have also noticed some additional bugs related to TCP stream reassembly in other recent releases of Wireshark. However, we’d like to stress that Wireshark does perform a correct reassembly of most TCP streams; it is only in some specific situations that Wireshark produces a broken reassembly. Unfortunately a minor bug like this can cause serious consequences, for example when the TCP stream is analyzed as part of a digital forensics investigation or when the extracted data is being used as input for further processing. We have therefore decided to include a TCP stream reassembly engine in CapLoader 1.5. The steps required to reassemble a TCP stream in CapLoader are:

  1. Double click a TCP flow of interest in the “Flows” tab to open a flow transcript.
  2. Click the “Save Client Byte Stream” or “Save Server Byte Stream” button to save the data stream for the desired direction to disk.
Transcript window in CapLoader 1-5

Extracting TCP streams from PCAP files this way not only ensures that the data stream is correctly reassembled; it is also faster and simpler than pivoting through Wireshark’s Follow TCP Stream feature.
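
For readers curious about what a reassembly engine has to deal with, here is a minimal Python sketch (not CapLoader’s actual engine) of one-directional TCP reassembly. It indexes segments on their sequence numbers, so out-of-order arrivals, retransmissions and partially overlapping segments don’t end up duplicated in the output:

def reassemble(segments, isn):
    # segments: list of (seq, payload) tuples for one direction of a session
    # isn: the initial sequence number from that direction's SYN packet
    buffered = {}                        # seq -> payload
    for seq, payload in segments:
        if payload and seq not in buffered:
            buffered[seq] = payload      # identical retransmissions are ignored
    stream = b""
    expected = isn + 1                   # the first data byte follows the SYN
    while buffered:
        if expected in buffered:
            payload = buffered.pop(expected)
        else:
            # partially overlapping retransmission: keep only the unseen tail
            seq = next((s for s in buffered
                        if s < expected < s + len(buffered[s])), None)
            if seq is None:
                break                    # a gap (missing segment), stop here
            payload = buffered.pop(seq)[expected - seq:]
        stream += payload
        expected += len(payload)
    return stream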

PCAP Icon Context Menu

CapLoader 1.5 PCAP icon Save As...

The PCAP icon in CapLoader is designed to allow easy drag-and-drop operations in order to open a set of selected flows in an external packet analysis tool, such as Wireshark or NetworkMiner. Right-clicking this PCAP icon will bring up a context menu, which can be used to open a PCAP with the selected flows in an external tool or copy the PCAP to the clipboard. This context menu has been extended in CapLoader 1.5 to also include a “Save As” option. Previous versions of CapLoader required the user to drag-and-drop from the PCAP icon to a folder in order to save filtered PCAP data to disk.

Faster Parsing with Protocol Identification

CapLoader can identify over 100 different application layer protocols, including HTTP, SSL, SSH, RTP, RTCP and SOCKS, without relying on port numbers. The protocol identification has previously slowed down the analysis quite a bit, which has caused many users to disable this powerful feature. This new release of CapLoader comes with an improved implementation of the port-independent protocol identification feature, which enables PCAP files to be loaded twice as fast as before with the “Identify protocols” feature enabled.

Works in Linux and macOS

One major improvement in CapLoader 1.5 is that this release is compatible with the Mono framework, which makes CapLoader platform independent. This means that you can now run CapLoader on your Mac or Linux machine if you have Mono installed. Please refer to our previous blog posts about how to run NetworkMiner in various flavors of Linux and macOS to find out how to install Mono on your computer. You will, however, notice a performance hit when running CapLoader under Mono instead of using Windows, since the Mono framework isn’t yet as fast as Microsoft’s .NET Framework.

CapLoader 1.5 running in Linux with Mono
Image: CapLoader 1.5 running in Linux (Xubuntu).

Credits

We’d like to thank Sooraj for reporting a bug in the “Open With” context menu of CapLoader’s PCAP icon. This bug has been fixed in CapLoader 1.5 and Sooraj has been awarded an official “PCAP or it didn’t happen” t-shirt for reporting the bug.

PCAP or it didn't happen t-shirt
Image: PCAP or it didn't happen t-shirt

Have a look at our Bug Bounty Program if you also wanna get a PCAP t-shirt!

Downloading CapLoader 1.5

Everything mentioned in this blog post, except for the protocol identification feature, is available in our free trial version of CapLoader. To try it out, simply grab a copy here:
https://www.netresec.com/?page=CapLoader#trial (no registration needed)

All paying customers with an older version of CapLoader can download a free update to version 1.5 from our customer portal.


Domain Whitelist Benchmark: Alexa vs Umbrella

Alexa vs Umbrella

In November last year Alexa admitted in a tweet that they had stopped releasing their CSV file with the one million most popular domains.

Yes, the top 1m sites file has been retired

Members of the Internet measurement and infosec research communities were outraged, surprised and disappointed since this domain list had become the de-facto tool for evaluating the popularity of a domain. As a result of this, Cisco Umbrella (previously OpenDNS) released a free top 1 million list of their own in December the same year. However, by then Alexa had already announced that their “top-1m.csv” file was back up again.

The file is back for now. We'll post an update before it changes again.

The Alexa list was unavailable for just about a week but this was enough for many researchers, developers and security professionals to make the move to alternative lists, such as the one from Umbrella. This move was perhaps fueled by Alexa saying that the “file is back for now”, which hints that they might decide to remove it again later on.

We’ve been leveraging the Alexa list for quite some time in NetworkMiner and CapLoader in order to do DNS whitelisting, for example when doing threat hunting with Rinse-Repeat. But we haven’t made the move from Alexa to Umbrella, at least not yet.


Malware Domains in the Top 1 Million Lists

Threat hunting expert Veronica Valeros recently pointed out that there are a great number of malicious domains in the Alexa top one million list.

Researchers using Alexa top 1M as legit, you may want to think twice about that. You'd be surprised how many malicious domains end there.

I often recommend analysts to use the Alexa list as a whitelist to remove “normal” web surfing from their PCAP dataset when doing threat hunting or network forensics. And, as previously mentioned, both NetworkMiner and CapLoader make use of the Alexa list in order to simplify domain whitelisting. I therefore decided to evaluate just how many malicious domains there are in the Alexa and Umbrella lists.

hpHosts EMD (by Malwarebytes)

                                           Alexa   Umbrella
Whitelisted malicious domains:             1365    1458
Percent of malicious domains whitelisted:  0.89%   0.95%

Malware Domain Blocklist

                                           Alexa   Umbrella
Whitelisted malicious domains:             84      63
Percent of malicious domains whitelisted:  0.46%   0.34%

CyberCrime Tracker

                                           Alexa   Umbrella
Whitelisted malicious domains:             15      10
Percent of malicious domains whitelisted:  0.19%   0.13%

The results presented above indicate that Alexa and Umbrella both contain roughly the same number of malicious domains. The percentages also reveal that using Alexa or Umbrella as a whitelist, i.e. ignoring all traffic to the top one million domains, might result in ignoring up to 1% of the traffic going to malicious domains. I guess this is an acceptable number of false negatives, since techniques like Rinse-Repeat Intrusion Detection aren’t intended to replace traditional intrusion detection systems; they are meant to be used as a complement in order to hunt down the intrusions that your IDS failed to detect. Working on a reduced dataset containing 99% of the malicious traffic is an acceptable price to pay for having removed all the “normal” traffic going to the one million most popular domains.


Sub Domains

One significant difference between the two lists is that the Umbrella list contains subdomains (such as www.google.com, safebrowsing.google.com and accounts.google.com) while the Alexa list only contains main domains (like “google.com”). In fact, the Umbrella list contains over 1800 subdomains for google.com alone! This means that the Umbrella list in practice contains fewer main domains compared to the one million main domains in the Alexa list. We estimate that roughly half of the domains in the Umbrella list are redundant if you are only interested in main domains. However, having subdomains can be an asset if you need to match the full domain name rather than just the main domain name.
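
The practical difference between full matching and main-domain matching can be illustrated with a small Python sketch. The file names below are hypothetical, and the main-domain extraction is deliberately naive (it keeps the last two labels and thereby mishandles suffixes like .co.uk, which a real implementation should resolve using a public suffix list):

def load_top1m(path):
    # each line in a top-1m.csv file looks like "1,google.com"
    with open(path) as f:
        return {line.strip().split(",")[1] for line in f if "," in line}

def main_domain(hostname):
    return ".".join(hostname.lower().split(".")[-2:])

alexa = load_top1m("alexa-top-1m.csv")          # hypothetical file names
umbrella = load_top1m("umbrella-top-1m.csv")

def is_whitelisted(hostname):
    return (hostname.lower() in umbrella        # full domain match (Umbrella)
            or main_domain(hostname) in alexa)  # main domain match (Alexa)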


Data Sources used to Compile the Lists

The Alexa Extension for Firefox
Image: The Alexa Extension for Firefox

The two lists are compiled in different ways, which can be important to be aware of depending on what type of traffic you are analyzing. Alexa primarily receives web browsing data from users who have installed one of Alexa’s many browser extensions (such as the Alexa browser toolbar shown above). They also gather additional data from users visiting web sites that include Alexa’s tracker script.

Cisco Umbrella, on the other hand, compiles its data from “the actual world-wide usage of domains by Umbrella global network users”. We’re guessing this means building statistics from DNS queries sent through the OpenDNS service that was recently acquired by Cisco.

This means that the Alexa list might be better suited if you are only analyzing HTTP traffic from web browsers, while the Umbrella list probably is the best choice if you are analyzing non-HTTP traffic or HTTP traffic that isn’t generated by browsers (for example HTTP API communication).


Other Quirks

As noted by Greg Ferro, the Umbrella list contains test domains like “www.example.com”. These domains are not present in the Alexa list.

We have also noticed that the Umbrella list contains several domains with non-authorized gTLDs, such as “.home”, “.mail” and “.corp”. The Alexa list, on the other hand, only seems to contain real domain names.


Resources and Raw Data

Both the Alexa and Cisco Umbrella top one million lists are CSV files named “top-1m.csv”. The CSV files can be downloaded from these URLs:

  • Alexa: http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
  • Umbrella: http://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip

The analysis results presented in this blog post are based on top-1m.csv files downloaded from Alexa and Umbrella on March 31, 2017. The malware domain lists were also downloaded from the three respective sources on that same day.

We have decided to share the “false negatives” (malware domains that were present in the Alexa and Umbrella lists) for transparency. You can download the lists with all false negatives from here:
https://www.netresec.com/files/alexa-umbrella-malware-domains_170331.zip


Hands-on Practice and Training

If you wanna learn more about how a list of common domains can be used to hunt down intrusions in your network, then please register for one of our network forensic trainings. The next training will be a pre-conference training at 44CON in London.


Network Forensics Training in London

The Flag of the United States by Sam Howzit (CC BY 2.0)

People sometimes ask me when I will teach my network forensics class in the United States. The US is undoubtedly the country with the most advanced and mature DFIR community, so it would be awesome to be able to give my class there. However, not being a U.S. person and not working for a U.S. company makes it rather difficult for me to teach in the United States (remember what happened to Halvar Flake?).

So if you’re from the Americas and would like to take my network forensics class, then please don’t wait for me to teach my class at a venue close to you – because I probably won’t. My recommendation is that you instead attend my upcoming training at 44CON in London this September.

London Red Telephone Booth Long Exposure by negativespace.co (CC0)

The network forensics training in London will cover topics such as:

  • Analyzing a web defacement
  • Investigating traffic from a remote access trojan (njRAT)
  • Analyzing a Man-on-the-Side attack (much like QUANTUM INSERT)
  • Finding a backdoored application
  • Identifying botnet traffic through whitelisting
  • Rinse-Repeat Threat Hunting

The first day of training will focus on analysis using only open source tools. The second day will primarily cover training on commercial software from Netresec, i.e. NetworkMiner Professional and CapLoader. All students enrolling in the class will get a full 6 month license for both of these commercial tools.

NetworkMinerCapLoader

Hope to see you at the 44CON training in London!


NetworkMiner 2.2 Released

NetworkMiner 2.2 = Harder Better Faster Stronger

NetworkMiner 2.2 is faster, better and stronger than ever before! The PCAP parsing speed has more than doubled and even more details are now extracted from analyzed packet capture files.

The improved parsing speed of NetworkMiner 2.2 can be enjoyed regardless of whether NetworkMiner is run in Windows or Linux. Additionally, the user interface is more responsive and flickers far less when capture files are being loaded.

User Interface Improvements

The keyword filter available in the Files, Messages, Sessions, DNS and Parameters tabs has been improved so that rows can now be filtered on a single column of choice by selecting the desired column in a drop-down list. There is also an “Any column” option, which can be used to search for the keyword in all columns.

Keyword drop-down in NetworkMiner's Parameters tab

The Messages tab has also received an additional feature, which allows the filter keyword to be matched against the text in the message body as well as email headers when the “Any column” option is selected. This allows for an efficient analysis of messages (such as emails sent/received through SMTP, POP3 and IMAP as well as IRC messages and some HTTP based messaging platforms), since the messages can be filtered just like in a normal e-mail client.

We have also given up on using local timestamp formats; timestamps are now instead shown using the yyyy-MM-dd HH:mm:ss format with the time zone explicitly stated.

Protocol Parsers

NetworkMiner 2.2 comes with a parser for the Remote Desktop Protocol (RDP), which rides on top of COTP and TPKT. The RDP parser is primarily used in order to extract usernames from RDP cookies and show them on the Credentials tab. This new version also comes with better extraction of SMB1 and SMB2 details, such as NTLM SSP usernames.

RDP Cookies extracted with NetworkMiner 2.2

One big change that has been made behind the scenes of NetworkMiner is the move from .NET Framework 2.0 to version 4.0. This move doesn’t require any special measures to be taken for most Microsoft Windows users since the 4.0 Framework is typically already installed on these machines. If you’re running NetworkMiner in Linux, however, you might wanna check out our updated blog post on how to install NetworkMiner in Linux.

We have also added an automatic check for new versions of NetworkMiner, which runs every time the tool is started. This update check can be disabled by adding a --noupdatecheck switch to the command line when starting NetworkMiner.

NetworkMiner.exe --noupdatecheck capturefile.pcap

NetworkMiner Professional

Even though NetworkMiner 2.2 now uses ISO-like time representations, NetworkMiner still has to decide which time zone to use for the timestamps. The default decision has always been to use the same time zone as the local machine, but NetworkMiner Professional now additionally comes with an option that allows the user to select whether to use UTC (as nature intended), the local time zone or some other custom time zone for displaying timestamps. The time zone setting can be found in the “Tools > Settings” menu.

The Port-Independent-Protocol-Detection (PIPI) feature in NetworkMiner Pro has been improved for more reliable identification of HTTP, SSH, SOCKS, FTP and SSL sessions running on non-standard port numbers.

CASE / JSON-LD Export

We are happy to announce that the professional edition of NetworkMiner 2.2 now has support for exporting extracted details using the Cyber-investigation Analysis Standard Expression (CASE) format, which is a JSON-LD format for digital forensics data. The CASE export is also available in the command line tool NetworkMinerCLI.

We would like to thank Europol for recommending us to implement the CASE export format in their effort to adopt CASE as a standard digital forensic format. Several other companies in the digital forensics field are currently looking into implementing CASE in their tools, including AccessData, Cellebrite, Guidance, Volatility and XRY. We believe the CASE format will become a popular format for exchanging digital forensic data between tools for digital forensics, log correlation and SIEM solutions.

We will, however, still continue supporting and maintaining the CSV and XML export formats in NetworkMiner Professional and NetworkMinerCLI alongside the new CASE format.

Credits

I would like to thank Sebastian Gebhard and Clinton Page for reporting bugs in the Credentials tab and TFTP parsing code that have now been fixed. I would also like to thank Jeff Carrell for providing a capture file that has been used to debug an issue in NetworkMiner’s OpenFlow parser. There are also a couple of users who have suggested new features that have made it into this release of NetworkMiner. Marc Lindike suggested the powerful deep search of extracted messages and Niclas Hirschfeld proposed a new option in the PCAP-over-IP functionality that allows NetworkMiner to receive PCAP data via a remote netcat listener.

Upgrading to Version 2.2

Users who have purchased a license for NetworkMiner Professional 2.x can download a free update to version 2.2 from our customer portal.

Those who instead prefer to use the free and open source version can grab the latest version of NetworkMiner from the official NetworkMiner page.


Hunting AdwindRAT with SSL Heuristics


An increasing number of malware families employ SSL/TLS encryption in order to evade detection by Network Intrusion Detection Systems (NIDS). In this blog post I’m gonna have a look at Adwind, which is a cross-platform Remote Access Trojan (RAT) that has been using SSL to conceal its traffic for several years. Adwind RAT typically establishes SSL sessions to seemingly random TCP ports on the C2 servers. Hence, a heuristic that could potentially be used to hunt for Adwind RAT malware is to look for SSL traffic going to TCP ports that normally don’t use SSL. However, relying on ONLY that heuristic would generate way too many false positives.

Brad Duncan did an interesting writeup about Adwind RAT back in 2015, where he wrote:

I saw the same certificate information used last week, and it continues this week.
  • commonName = assylias
  • organizationName = assylias.Inc
  • countryName = FR
Currently, this may be the best way to identify Adwind-based post-infection traffic. Look for SSL traffic on a non-standard TCP port using that particular certificate.

Unfortunately, Adwind RAT has evolved to use other CNs in its new certificates, so looking for “assylias.Inc” will not cut it anymore. However, looking for SSL traffic on non-standard TCP ports still holds for the latest Adwind RAT samples that we’ve analyzed.

The PT Research Attack Detection Team (ADT) sent an email with IDS signatures for detecting Adwind RAT to the Emerging-Sigs mailing list a few days ago, where they wrote:

“We offer one of the ways to detect malicious AdwindRAT software inside the encrypted traffic. Recently, the detection of this malicious program in network traffic is significantly reduced due to encryption. As a result of the research, a stable structure of data fragments was created.”

Not only is it awesome that they were able to detect static patterns in the encrypted data, they also provided 25 PCAP files containing Adwind RAT traffic. I loaded these PCAP files into NetworkMiner Professional in order to have a look at the X.509 certificates. NetworkMiner Professional supports Port-Independent Protocol Identification (PIPI), which means that it will automatically identify the C2 sessions as SSL, regardless of which port is used. It will also automatically extract the X.509 certificates along with any other parameters that can be extracted from the SSL handshake before the session turns encrypted.

X.509 certificates extracted from AdwindRAT PCAP by NetworkMiner
Image: Files extracted from ADT’s PCAP files that match “Oracle” and “cer”.

In this recent campaign the attackers used X.509 certificates claiming to be from Oracle. The majority of the extracted certificates were exactly 1237 bytes long, so maybe they’re all identical? This is what the first extracted X.509 certificate looks like:

Self-signed Oracle America, Inc. X.509 certificate

The cert claims to be valid for a whopping 100 years!

Self-signed Oracle America, Inc. X.509 certificate

Self-signed, not trusted.

However, after opening a few of the other certificates it’s clear that each C2 server is using a unique X.509 certificate. This can be quickly confirmed by opening the Parameters tab in NetworkMiner Pro and showing only the Certificate Hash or Subject Key Identifier values.

NetworkMiner Parameters tab showing Certificate Hash values Image: Certificate Hash values found in Adwind RAT’s SSL traffic

I also noted that the CN of the certificates isn’t constant either; these samples use CNs such as “Oracle America”, “Oracle Tanzania”, “Oracle Arusha Inc.”, “Oracle Leonardo” and “Oracle Heaven”.

The CN field is normally used to specify which domain(s) the certificate is valid for, together with any additional Subject Alternative Name field. However, Adwind RAT’s certificates don’t contain any domain name in the CN field and they don’t have an Alternative Name record. This might very well change in future versions of this piece of malware, but I don’t expect the malware authors to generate a certificate with a CN matching the domain name used by each C2 server. I can therefore use this assumption in order to better hunt for Adwind RAT traffic.

But how do I know what public domain name the C2 server has? One solution is to use passive DNS, i.e. to capture all DNS traffic in order to do passive lookups locally. Another solution is to leverage the fact that the Adwind RAT clients use the Server Name Indication (SNI) when connecting to the C2 servers.

TLS Server Name (aka SNI) and Subject CN values don’t match for AdwindRAT Image: TLS Server Name (aka SNI) and Subject CN values don’t match for AdwindRAT

TLS Server Name (SNI) with matching Subject CN from Google Image: TLS Server Name (SNI) with matching Subject CN from Google.

My conclusion is therefore that Brad’s recommendations from 2015 are still pretty okay, even for the latest wave of Adwind RAT traffic. However, instead of looking for a fixed CN string I’d prefer to use the following heuristics to hunt for this type of C2 traffic:

  • SSL traffic to non-standard SSL port
  • Self-signed X.509 certificate
  • The SNI domain name in the Client Hello message does not match the CN or Subject Alternative Name of the certificate.

These heuristics will match more than just Adwind RAT traffic though. You’ll find that the exact same heuristics will also help identify other pieces of SSL-enabled malware as well as Tor traffic.
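
As a rough illustration of the last two heuristics, here is a Python sketch using recent versions of the cryptography package on a DER-encoded certificate, such as one extracted from an SSL handshake by NetworkMiner. The name matching is deliberately simplified and doesn’t handle wildcard certificates:

from cryptography import x509
from cryptography.x509.oid import NameOID

def looks_suspicious(cert_der, sni):
    cert = x509.load_der_x509_certificate(cert_der)
    self_signed = cert.issuer == cert.subject            # heuristic 2
    names = [attr.value for attr in
             cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName)
        names += san.value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        pass
    sni_mismatch = sni is not None and sni not in names  # heuristic 3
    return self_signed and sni_mismatch

The first heuristic (SSL on a non-standard port) is a property of the flow rather than of the certificate, so it is best checked with a port-independent protocol identification feature like the one in NetworkMiner Professional or CapLoader.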


CapLoader 1.6 Released

CapLoader 1.6

CapLoader is designed to simplify complex tasks, such as digging through gigabytes of PCAP data looking for traffic that sticks out or shouldn’t be there. Improved usability has therefore been the primary goal when developing CapLoader 1.6, in order to help our users do their work even more efficiently than before.

Some of the new features in CapLoader 1.6 are:

  • Context aware selection and filter suggestions when right-clicking a flow, session or host.
  • Support for IPv6 addresses in the BPF syntax forInput Filter as well as Display Filter.
  • Flows that are inactive for more than 60 minutes are considered closed. This timeout is configurable in Tools > Settings.

Latency Measurements

CapLoader 1.6 also introduces a new column in the Flows tab labeled “Initial_RTT”, which shows the Round Trip Time (RTT) measured during the start of a session. The RTT is defined as “the time it takes for a signal to be sent plus the time it takes for an acknowledgment of that signal to be received”. RTT is often called “ping time” because the ping utility computes the RTT by sending ICMP echo requests and measuring the delay until a reply is received.

Initial RTT in CapLoader Flows Tab
Image: CapLoader 1.6 showing ICMP and TCP round trip times.

But using a PCAP file to measure the RTT between two hosts isn’t as straightforward as one might think. One complicating factor is that the PCAP might be generated by the client, the server or by any device in between. If we know that the sniffing point is at the client then things are simple, because we can then use the delta-time between an ICMP echo request and the returning ICMP echo response as the RTT. In the absence of ping traffic the same thing can be achieved with TCP by measuring the time between a SYN and the returning SYN+ACK packet. However, consider the situation when the sniffer is located somewhere between the client and server. The previously mentioned method would then ignore the latency between the client and the sniffer; the delta-time will therefore only show the RTT between the sniffer and the server.

This problem is best solved by calculating the Initial RTT (iRTT) as the delta-time between the SYN packet and the final ACK packet in a TCP three-way handshake, as shown here:

Initial Round Trip Time in PCAP Explained
Image: Initial RTT is the total time of the black/bold packet traversal paths.

Jasper Bongertz does a great job of explaining why and how to use the iRTT in his blog post “Determining TCP Initial Round Trip Time”, so I will not cover it in any more detail here. However, keep in mind that iRTT can only be calculated this way for TCP sessions. CapLoader therefore falls back on measuring the delta time between the first packet in each direction when it comes to transport protocols like UDP and ICMP.
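
For TCP sessions, a minimal Scapy-based sketch of the iRTT calculation could look as follows; it assumes the capture in the hypothetical file handshake.pcap contains a single TCP session that starts with a complete three-way handshake:

from scapy.all import rdpcap, IP, TCP

syn_time = client = expected_ack = None

for pkt in rdpcap("handshake.pcap"):
    if TCP not in pkt:
        continue
    if pkt[TCP].flags == "S":            # client SYN starts the clock
        syn_time = pkt.time
        client = pkt[IP].src
    elif pkt[TCP].flags == "SA" and syn_time is not None:
        expected_ack = pkt[TCP].seq + 1  # the final ACK must acknowledge this
    elif (expected_ack is not None and pkt[IP].src == client
            and pkt[TCP].flags == "A" and pkt[TCP].ack == expected_ack):
        print("Initial RTT: %.3f ms" % (float(pkt.time - syn_time) * 1000))
        break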


Exclusive Features Not Available in the Free Trial

The new features mentioned so far are all available in the free 30 day CapLoader trial, which can be downloaded from our CapLoader product page (no registration required). But we’ve also added features that are only available in the commercial/professional edition of CapLoader. One such exclusive feature is the matching of hostnames against the Cisco Umbrella top 1 million domain list. CapLoader already had a feature for matching domain names against the Alexa top 1 million list, so the addition of the Umbrella list might seem redundant. But it’s actually not: the two lists are compiled using different data sources and therefore complement each other (see our blog post “Domain Whitelist Benchmark: Alexa vs Umbrella” for more details). Also, the Umbrella list contains subdomains (such as www.google.com, safebrowsing.google.com and accounts.google.com) while the Alexa list only contains main domains (like “google.com”). CapLoader can therefore do more fine-granular domain matching with the Umbrella list (requiring a full match of the Umbrella domain), while the Alexa list enables a rougher “catch ‘em all” approach (allowing *.google.com to be matched).

CapLoader Hosts tab with ASN, Alexa and Umbrella details

CapLoader 1.6 also comes with an ASN lookup feature, which presents the autonomous system number (ASN) and organization name for IPv4 and IPv6 addresses in a PCAP file (see image above). The ASN lookup is built using the GeoLite database created by MaxMind. The information gained from the MaxMind ASN database is also used to provide intelligent display filter CIDR suggestions in the context menu that pops up when right-clicking a flow, service or host.

CapLoader Flows tab with context menu for Apply as Display Filter
Image: Context menu suggests Display Filter BPF “net 104.84.152.0/17” based on the server IP in the right-clicked flow.

Users who have previously purchased a license for CapLoader can download a free update to version 1.6 from our customer portal.


Credits and T-shirts

We’d like to thank Christian Reusch for suggesting the Initial RTT feature and Daan from the Dutch Ministry of Defence for suggesting the ASN lookup feature. We’d also like to thank David Billa, Ran Tohar Braun and Stephen Bell for discovering and reporting bugs in CapLoader, which have now been fixed. These three guys have received a “PCAP or it didn’t happen” t-shirt as promised in our Bug Bounty Program.

Got a t-shirt for crashing CapLoader

If you too wanna express your view of outlandish cyber attack claims without evidence, then please feel free to send us your bug reports and get rewarded with a “PCAP or it didn’t happen” t-shirt!


Don't Delete PCAP Files - Trim Them!


We are happy to release TrimPCAP today! TrimPCAP is a free open source tool that reduces the size of capture files in an intelligent way.

TrimPCAP

The retention period of a packet capture solution is typically limited by either legal requirements or available disk space. In the latter case the oldest capture files are simply removed when the storage starts getting full. This means that if there is a long ongoing session, such as a download of a large ISO file, streamed video or a reverse shell backdoor, then the start of this session will likely be removed.

I know from experience that it’s painful to analyze network traffic where the start of a session is missing. The most important and interesting stuff generally happens in the beginning of each session, such as the HTTP GET request for an ISO file. As an analyst you don’t need to look at all the other packets in that ISO download (unless you believe the ISO contains malware), it’s enough to see that there is a GET request for the file and the server responds with a “200 OK”.

CapLoader Transcript of ISO download
Image: CapLoader transcript of an ISO download

If that download had been truncated, so that only the last few packets were remaining, then it would be really difficult to know what was being downloaded. The same is true also for other protocols, including proprietary C2 protocols used by botnets and other types of malware.


TrimPCAP

TrimPCAP is designed to overcome the issue with truncated sessions by removing data from the end of sessions rather than from the beginning. This also comes with a great bonus when it comes to saving disk usage, since the majority of the bytes transferred across the Internet are made up of big sessions (a.k.a. “Elephant Flows”). Thus, by trimming a PCAP file so that it only contains the first 100kB of each TCP and UDP session it’s possible to significantly reduce the required storage for that data.
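
The core trimming idea can be sketched in a few lines of Python with Scapy. This is a simplification rather than the actual trimpcap.py implementation, and the file names are hypothetical:

from scapy.all import PcapReader, PcapWriter, IP, TCP, UDP

FLOW_CUTOFF = 102400              # keep the first 100kB of each flow
flow_bytes = {}                   # flow key -> number of bytes seen so far

writer = PcapWriter("trimmed.pcap")
for pkt in PcapReader("in.pcap"):
    if IP in pkt and (TCP in pkt or UDP in pkt):
        proto = TCP if TCP in pkt else UDP
        a = (pkt[IP].src, pkt[proto].sport)
        b = (pkt[IP].dst, pkt[proto].dport)
        key = (proto.name,) + (a + b if a <= b else b + a)
        seen = flow_bytes.get(key, 0)
        if seen > FLOW_CUTOFF:
            continue              # flow already at the cutoff: drop the packet
        flow_bytes[key] = seen + len(pkt)
    writer.write(pkt)             # keep the packet (incl. non-TCP/UDP traffic)
writer.close()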

The following command reduces the PCAP dataset used in our Network Forensics Training from 2.25 GB to just 223 MB:

user@so$ python trimpcap.py 102400 /nsm/sensor_data/so-eth1/dailylogs/*/*
Trimming capture files to max 102400 bytes per flow.
Dataset reduced by 90.30% = 2181478771 bytes
user@so$

A maximum session size (or "flow cutoff") of 100kB enables trimpcap.py to reduce the required storage for that dataset to about 10% of its original size, which will significantly extend the maximum retention period.


Putting TrimPCAP Into Practice

Let’s assume that your organization currently has a maximum full content PCAP retention period of 10 days and that trimming sessions to 100kB reduces the required storage to 10%. TrimPCAP will then enable you to store 5 days with full session data plus 50 additional days with trimmed sessions using the same disk space as before!

TrimPCAP retention period extension
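
The arithmetic behind this claim is straightforward. Here it is as a back-of-the-envelope Python sketch, assuming a constant daily capture volume (the 100 GB/day figure is hypothetical):

daily_gb = 100.0                  # hypothetical full-content volume per day
budget_gb = 10 * daily_gb         # disk budget equals 10 days of full PCAP

full_days = 5                     # keep 5 days untrimmed...
left_gb = budget_gb - full_days * daily_gb
trimmed_days = left_gb / (0.10 * daily_gb)   # ...and the rest at 10% size

print("%d full days + %d trimmed days" % (full_days, trimmed_days))
# prints: 5 full days + 50 trimmed days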

A slightly more advanced scheme would be to have multiple trim limits, such as trimming to 1MB after 3 days, 100kB after 6 days and 10kB after 30 days. Such a setup would probably extend your total retention period from 10 days to over 100 days.

An even more advanced trimming scheme is implemented in our packet capture agent PacketCache. PacketCache constantly trims its PCAP dataset because it is designed to use only 1 percent of a PC’s RAM to store observed packets, in case they are needed later on for incident response. PacketCache uses a trim limit that declines individually for each observed TCP and UDP session depending on when it was last observed.


Downloading TrimPCAP

TrimPCAP is open source software and is released under the GNU General Public License version 2 (GPLv2). You can download your own copy of TrimPCAP from the official TrimPCAP page:
https://www.netresec.com/?page=TrimPCAP

Happy trimming!

✂- - - - - - - - -


Zyklon Malware Network Forensics Video Tutorial

We are releasing a series of network forensics video tutorials throughout the next few weeks. First up is this analysis of a PCAP file containing network traffic from the "Zyklon H.T.T.P." malware.

Analyzing a Zyklon Trojan with Suricata and NetworkMiner

Resources
https://www.malware-traffic-analysis.net/2017/07/22/index.html
https://github.com/Security-Onion-Solutions/security-onion
https://www.arbornetworks.com/blog/asert/wp-content/uploads/2017/05/zyklon_season.pdf
http://doc.emergingthreats.net/2017930

IOCs
service.tellepizza.com
104.18.40.172
104.18.41.172
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3pre) Gecko/20070302 BonEcho/2.0.0.3pre
gate.php
.onion
98:1F:D2:FF:DC:16:B2:30:1F:11:70:82:3D:2E:A5:DC
65:8A:5C:76:98:A9:1D:66:B4:CB:9D:43:5C:DE:AD:22:38:37:F3:9C
E2:50:35:81:9F:D5:30:E1:CE:09:5D:9F:64:75:15:0F:91:16:12:02:2F:AF:DE:08:4A:A3:5F:E6:5B:88:37:D6
