3 Sep 2013

Packetloop acquired by Arbor Networks

Tuesday, September 03, 2013
Today we are thrilled to announce that Packetloop has agreed to become part of Arbor Networks. We could not have found a better partner than Arbor, an icon in the security industry that leads the way in network traffic management and DDoS mitigation solutions. I say partner because we see this relationship as more than just an acquisition. It brings together two powerful and complementary security solutions, and two organizations that are remarkably similar despite the obvious difference in size. Let me explain.

When we started Packetloop back in 2011, we set out with a singular goal: to create a platform that would change the way the industry performed security analytics. Not just any platform, but one that we would be happy to use ourselves on a daily basis as security consultants. We also listened to what our customers and peers were saying: the challenges they were facing and the questions they had that were not being answered. The platform also had to solve what no one else had yet achieved: how to process and present network packet captures on a massive scale, from anywhere in the network. This meant processing terabytes of data representing months of network traffic. All of these elements went into the development of Packetloop and drove a cycle of constant innovation that led to us having the first Cloud-based Big Data Security Analytics platform in the market.

On this journey we spoke to a lot of different people and got plenty of feedback from different organizations. Everyone had a view on what we were doing, how we could do it better, whom we should partner with and what was missing from the platform. We never let this shift the focus from ensuring we put the best features into the platform, even if this meant delaying a release or re-engineering part of the platform to find more performance or simply present the data in a better way.

But it was in our initial meetings with Arbor that we first truly met a group of like-minded security professionals who fully understood what we were trying to achieve, appreciated the scale of what we had created and could see the future in our feature roadmap. As we were introduced to the wider Arbor team, we discovered a company culture identical to our own: a team with a sense of focus, committed and passionate about achieving their goals, who had been able to maintain that startup feel. Arbor threw down a challenge to us. Did we really want to build something big? Did we want to revolutionize the security analytics market? With this challenge (building and changing the market), an understanding of where Packetloop would fit alongside Peakflow and Pravail, and more importantly of how the team would fit into Arbor, completing the deal that led to today's announcement just made sense. See what acquiring Packetloop means to Arbor and you will understand the significance of this deal.

So what does this mean for Packetloop? We have a world-class Big Data Security Analytics platform that now has the backing of an incredible engineering and support team. We will now be represented by a global sales and consulting presence, and we will have access to the ATLAS Intelligence Feed and ASERT research teams. Most importantly for us, Arbor’s belief in what we have achieved so far means that they will invest in further growth of the Packetloop development team locally, establishing an R&D centre of excellence based in Sydney, Australia. This is incredible news for the Australian information security industry and Australian startups in general, and further proof that Arbor is a truly global organization. We look forward to the months ahead, where we will integrate Packetloop with the Arbor product suite and bring to the industry an even more revolutionary suite of security analytics tools that will change the way people think about Advanced Threat Mitigation.

As this will be the last blog post under the Packetloop banner, I would like to take this opportunity to thank all of our early access and beta users, customers and free users, blog readers, people who followed us on social media and everyone who turned up at conferences to hear us speak about Big Data Security Analytics. Your feedback and continued support is what drives us and is what makes us keep innovating and delivering the best platform we can. Most importantly I need to thank the Packetloop team, whose incredible efforts have made this announcement possible today.

For more information on Arbor and the acquisition, please see the full press release on the Arbor Networks website.

Scott Crane
CEO and Co-Founder

15 May 2013

Manipulating Time and Space - Big Data Security Analytics and the Kill Chain

Wednesday, May 15, 2013

In preparing to speak at the AusCERT Conference next week I kept thinking about the 'Promise' of Big Data and Security. This includes its future potential as well as the hope it brings of delivering the next generation and evolution of security products. I am also mindful that part of the audience will think Big Data is just hype or buzz, and that as it enters the trough of disillusionment it will be absorbed and quickly forgotten.

Then I re-read the Mandiant APT1 report and posts related to the tactics used then and now. As often happens, I remembered something that Scott Crawford and I had discussed months before and got a Tada! moment: Big Data Security Analytics is the first technology capable of disrupting the lateral movements of attackers. This progression is often referred to as the attack lifecycle or the kill chain.

Attack Life Cycles (Kill Chain)

The attack lifecycle or Kill Chain reflects the reality of modern tactics when it comes to a compromise. Some great references to read more about it are:
  • “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains” by Hutchins, Cloppert and Amin from Lockheed Martin Corporation [PDF]
  • Mandiant's APT1 Report p27 "APT1: Attack Lifecycle" [PDF]
  • "A Case Study of Intelligence-Driven Defense" by Dan Guido.
In the Hutchins, Cloppert and Amin paper the phases are defined as:
  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command and Control
  • Actions and Objectives
Breaking the kill chain can be thought of as trying to Detect, Deny, Disrupt, Degrade, Deceive or Destroy these phases.
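This mapping of defensive actions onto phases is often drawn as a courses-of-action matrix. A minimal sketch in Python: the phase names come from the Hutchins, Cloppert and Amin paper listed above, but the example mitigations here are illustrative placeholders, not a complete or authoritative matrix.

```python
# Kill-chain phases mapped to example defensive courses of action.
# Phase names are from Hutchins, Cloppert and Amin; the mitigations
# are illustrative examples only, not an exhaustive matrix.
KILL_CHAIN = [
    ("Reconnaissance", {"Detect": "web analytics", "Deny": "firewall ACL"}),
    ("Weaponization", {"Detect": "NIDS", "Deny": "NIPS"}),
    ("Delivery", {"Detect": "vigilant users", "Deny": "proxy filter",
                  "Disrupt": "inline AV"}),
    ("Exploitation", {"Detect": "HIDS", "Deny": "patching", "Disrupt": "DEP"}),
    ("Installation", {"Detect": "HIDS", "Deny": "application whitelisting"}),
    ("Command and Control", {"Detect": "NIDS", "Deny": "firewall ACL",
                             "Disrupt": "DNS sinkhole"}),
    ("Actions and Objectives", {"Detect": "audit logs",
                                "Degrade": "quality of service"}),
]

def actions_for(phase: str) -> dict:
    """Return the candidate defensive actions for a given kill-chain phase."""
    for name, actions in KILL_CHAIN:
        if name == phase:
            return actions
    raise KeyError(phase)
```

The point of holding the matrix as data rather than prose is that a defender can audit coverage: any phase whose action dict is empty is a gap in the defensive line.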

Invasion Games

The dichotomy of the Advanced Persistent Threat (APT) is that these attackers are usually not advanced technically; rather, their posture is advanced in the sense that they are prepared to move and think laterally to accomplish their goal. Why is that? What is attacker asymmetry?

I find it best to explain this as an invasion game, a class of games we are all really familiar with. If you aren't familiar with the term there's a quick explanation here. For me the greatest invasion game is Rugby, a game where defence has improved significantly over the last decade. Defence dominates the game and good defensive lines can totally stifle an attack.

Invasion games pit Attackers against Defenders. The Attackers have a defined goal and can manipulate or avoid defensive lines. Defensive lines can be thought of as passive (on their heels) or active. Active defenses communicate, cover each other and frustrate - their goal is to disrupt, delay and ultimately repel the attack. 

Attackers and Defenders are both trying to manipulate time and space. This can be thought of as the speed and structure of their attack patterns as well as how broad or spread they are.

In ‘Security’, defensive lines have generally been inactive, or passive at best. When faced with a determined Attacker, defensive lines are easily stretched and avoided, and where there are collisions the Attacker is able to win them. There is little disruption of the attack lifecycle or breaking of the ‘kill chain’. Furthermore, collisions are not sought out by Defenders to create a contest.

Great defensive lines communicate, are knowledgeable of attack patterns, move fast off the line, collapse when breached, and are able to stretch and reset. They seek to encounter and win their interactions. They manipulate time and space by moving forward to meet the attacker, forcing attackers into predictable lateral moves that are easy to disrupt.

Understanding attack patterns and seeking out and winning collisions denies momentum and this is the disruption of the kill chain.

Manipulating Time and Space

Pioneering the use of Big Data for Security taught us a lot about attackers and attack lifecycles. We are able to enumerate, enrich, link and build context to understand security events from network packet captures. If I want the deep packet inspection information for every indicator and warning, I can get it; if I want to track the specific attributes of an attacker (user agent(s) or operating system), I can. There's literally no limit to the information that can be accessed, extracted, enriched and linked contextually: Threats, Sessions, Protocols and Files; Security, Network and Threat intelligence.

Big Data tooling and NoSQL data models allowed us to manipulate 'Space' and radically changed the nature of 'Time'. You can zoom from years to minutes and understand attacks and attackers in incredible detail, but you have to wait - maybe 7 minutes, maybe 15 minutes. This was the trade-off for Big Data, or so we thought.

To truly create a next generation security technology Big Data Security Analytics needs to disrupt the attack life cycle or kill chain. This means not just solving the Size and Scale problems of network data and security event streams but also doing this in real time.

A real-time Big Data Security Analytics system is broad (laterally), seeks out and wins its collisions - that is, every interaction with the attacker is expected to be biased towards the defender - and enables decisions to be made in real time. These decisions relate to the modelling and disruption of the Kill Chain.

While processing at the speed of the stream (network and security event streams), we can't dismiss the incredible amount of knowledge that is delivered after the fact. It's the reason we are named Packetloop. There's gold in replaying network traffic and reprocessing files, and this information is generally the best information for Kill Chain modelling.

Disrupting the Kill Chain in Real Time

In the previous section I mentioned that there are collisions between Defenders and Attackers. These collisions can be thought of as interactions and where there is an interaction I want it biased in my (defender's) favor. The bias is in terms of information and knowledge regarding the current interaction and how it relates to all other interactions.

So suppose you give me a file (via email, or something I was tricked into downloading). This single file interaction holds so much information that I can use. I have quickly sketched some of it in the diagram below:

The 'Jujutsu' of interactions

So take a file, enrich it, link the information and correlate it based on other information you have from Threats, Sessions and Protocols, and you start to see how this could be used to disrupt a Kill Chain. Is it the compile time? Is it the ssdeep hash of the file hidden inside the executable? Is it a YARA signature triggering on shellcode? Is it the emulation of shellcode? Is it the IP address of the web server? The country it resides in? Its name servers? Or the mean distance between those name servers?

When you read the Mandiant APT1 report or similar posts you realise how successful attacks can be when they move laterally. Deliver the file by email, establish C2 communication initially via HTTP (WEBC2) and then later via a more elaborate remote access trojan (RAT), move laterally through privilege escalation and further compromise, then compress, encrypt and exfiltrate the data.

A real-time Big Data Security Analytics system can model this as it happens.
  • The email is processed and the mail headers extracted to gain the originating IP address of the sender. The text can be analysed for irregularities and sentiment, and the attachment extracted and processed.
  • Pivoting off the attachment can produce a vast amount of information. Are there files embedded inside the attachment? Is the attachment, or a file within it, known malware when compared against VirusTotal or a malware database (e.g. by MD5/SHA-1/ssdeep)?
  • Detonate the attachment in a controlled way using a Malware Sandbox and extract the output communications (DNS, HTTP, IRC, XMPP) for DNS and IP information.
  • Determine, based on Session and Protocol information, whether this communication is an outlier.
  • Correlate all of this information with indicators and warnings produced by threat management systems.
The correlation is not a JOIN on an IP address; it is a probabilistic model that is used to make a decision - but more on this in a future post.
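The first two steps above can be sketched with nothing more than the standard library. The ssdeep comparison and sandbox detonation need external tooling, so this sketch stops at cryptographic hashes, and the KNOWN_BAD_SHA1 set is a stand-in for a real malware database lookup (the hash in it is a dummy value):

```python
import email
import hashlib
import re
from email import policy

# Stand-in for a real malware hash database (e.g. a VirusTotal lookup).
# The entry below is a dummy placeholder, not a real malware hash.
KNOWN_BAD_SHA1 = {"deadbeef" * 5}

def analyse_message(raw_bytes: bytes):
    """Extract the originating IP from the earliest Received header and
    hash every attachment for comparison against known-bad hashes."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    received = msg.get_all("Received") or []
    origin_ip = None
    if received:
        # The last Received header was stamped first, closest to the sender.
        match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", received[-1])
        origin_ip = match.group(1) if match else None
    findings = []
    for part in msg.iter_attachments():
        payload = part.get_payload(decode=True) or b""
        sha1 = hashlib.sha1(payload).hexdigest()
        findings.append({
            "filename": part.get_filename(),
            "md5": hashlib.md5(payload).hexdigest(),
            "sha1": sha1,
            "known_malware": sha1 in KNOWN_BAD_SHA1,
        })
    return origin_ip, findings
```

In a real pipeline the findings dict would be enriched further (ssdeep, YARA, sandbox output) before feeding the correlation model.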

Although my points look simple, there is real math and real science (Machine Learning) that can be applied to this task. This is light years away from traditional classification and correlation. For example, take the modelling of entropy for Metasploit's Meterpreter - a payload delivered to remotely access a compromised host.

It's a simple model because I am only looking at two vectors (features): the entropy of the data transmitted and the amount, or size, of data transmitted. The blue line is the Client to Server entropy and the red dots are Server to Client entropy. This conversation takes place over HTTP and, despite some weird URIs, it looks like any other conversation to Wireshark.
Client and Server Entropy for a Meterpreter Session over HTTP
When I place the same conversation among approximately 55K other HTTP conversations you can see how even simple features can be used to find outliers. In the figure below I have graphed Client to Server entropy for 54,189 HTTP conversations alongside that of the Meterpreter session, which is also using HTTP. Meterpreter encrypts all communications between the client and the server and therefore has a very high entropy of almost 8 bits per byte.
The Meterpreter needle in a HTTP haystack
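The entropy feature behind both figures is cheap to compute. A minimal sketch follows: Shannon entropy in bits per byte over a payload, with an illustrative threshold (7.9 is my assumption for this example, not a canonical cut-off) for flagging likely-encrypted channels.

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(payload: bytes, threshold: float = 7.9) -> bool:
    """Flag payloads whose entropy approaches the 8 bits/byte ceiling,
    as an encrypted Meterpreter channel does. Threshold is illustrative."""
    return entropy_bits_per_byte(payload) > threshold
```

Plain-text HTTP sits well below the ceiling, which is why the Meterpreter session stands out so clearly against the 54,189 ordinary conversations.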


Big Data Security Analytics is the next generation of real-time security products and has real applicability in disrupting the attack lifecycle or kill chain. Simple lateral attacks currently defeat defensive lines because of a lack of communication and information sharing; there's no brain to contemplate and mitigate attacks.

I have briefly touched on Machine Learning and its use in Big Data Security Analytics, and will focus on it in some future blog posts. Hope you enjoyed this post! If you did, let us know!
10 Apr 2013

5 minutes with Threat Analysis

Wednesday, April 10, 2013
Packetloop's Threat Analysis feature allows you to step through attacks play by play to accurately confirm indicators of compromise with real evidence. This screencast follows on from my last post "From Indicators of Compromise to Smoking Guns".

As you can see, security analysts have the ability to identify and understand attacks incredibly fast. They can visualize the entire attack timeline and walk through every packet step by step. Using the Advanced Filter they can quickly identify the source of the attack, how the breach took place, how long the attacker was inside the network, what systems were affected and what information was accessed or stolen.
25 Mar 2013

From Indicators of Compromise to Smoking Guns

Monday, March 25, 2013
In a previous post I used the intuitive visualization in Packetloop to zero in on a particular attacker that had targeted at least two systems with indicators suggesting Warez related FTP and the delivery of shellcode. The analysis at that time was interesting but hardly a smoking gun. The security analyst was presented with indicators of compromise but could not conclusively prove a breach.

Packetloop's "Threat Analysis" feature started out as a functionality story called "play by play". I wanted to be able to peer inside an attack and step through it play by play. To do this we needed to take every indicator and warning, perform deep packet inspection on every packet from every conversation, and link it into our User Interface. Packetloop's Advanced Filter experience is powerful and fast, and Threat Analysis had to respond in the same way - so you could zoom in and out, pan left and right through time, and filter on Attacker, Victim, Attack, Port or Industry Reference (e.g. CVE). It was a pretty bold concept and initially difficult enough that we pushed the functionality back - but we didn't give up ;)

Remember, this is the canonical DARPA98 data set that I am analysing here, so the attacks have that old-school retro feel.

Threat Analysis

In the previous post the source of attack was located due to a large number of New Attacks in a very small period of time (12 new attacks in 1 minute). These attacks were related to information discovery (Finger and RPC Mapping) against the target host. On closer inspection there was suspected FTP warez activity triggering a number of indicators. Filtering by this host as the source and then zooming out to a 3 month time window, we were able to view the entire attack timeline and find a second host that was also the destination of attacks (x86 NOOP Shellcode). Despite these indicators it was difficult to conclusively prove and analyse the breach.

The attack timeline shows Finger and RPC requests being used by the Attacker to enumerate the target

An open FTP server is used to store and access Warez and Tools including the Linux Root Kit.

A vulnerability in BIND is exploited to gain root access
Threat analysis enables you to move from indicators and warnings to the proverbial smoking gun. The initial series of attacks is mostly information discovery, with finger attempts and rpcmaps to enumerate users and interfaces. The second set of attacks is linked to FTP and Warez, and this is where Threat Analysis really shows its power. If we focus on those purple bars in the centre of the main visualization and switch to the Analysis view in the Data Panel, we can immediately see exactly what this attacker is doing.

The attacker logs in to the FTP server as 'ftp', idents, and then changes into the "caliberX" and then the "Win98.Final-PWA" directory. At a glance we have gone from thinking this might be suspicious activity on our network to knowing that it is. Scrolling down we can view the individual files the attacker is accessing, including the zip files and nfo files.

Later in the attack timeline the evidence becomes clearer and more damning. Looking into more FTP sessions between the attacker and the FTP server, we see the attacker download tools for exploiting Linux systems.

Again the attacker logs in as 'ftp', changes directory into 'lr2k-1.1' and then, in the final row of output, downloads 'lrootk.tgz' - a version of the Linux Rootkit.

The Shellcode attacks between the two hosts establish a shell that is used to initiate an X11 Window session between the attacker and the target. Using the advanced filter we can limit the search and zoom into the attack timeline. Shellcode is delivered over DNS (UDP/53) in a series of attacks at 12:33AM and then another flurry of attacks at 12:39AM.

The Shellcode targets a vulnerability in BIND 4.9 and BIND 8.0. This can be determined by highlighting the CVE in the Advanced Filter.

The timeline is important, as an X11 session is established in the reverse direction (from the target back to the attacker) soon after the initial attacks, with the first back channel created at 12:39AM. This is shown in the screenshot below.

Packetloop's Threat Analysis provides a full breakdown of the X11 session.

We can tell that the attacker used the DNS exploit to gain root access because they issue an 'id' command that returns 'root'.

Again we can access a detailed breakdown of the X11 sessions where the id command is executed.


This is a canonical example of an attacker performing reconnaissance and targeting, exploiting a vulnerability and establishing full root access. Packetloop allows you to find and analyse these incidents in minutes with full data fidelity. Every attack and attacker can be isolated, every packet in the attack can be stepped through and analysed.

With Packetloop's Threat Analysis there is no guesswork. The entire attack timeline can be examined, from months down to minutes.

Sign Up for a Free account today and explore the 50GB of Public Datasets available on the Packetloop platform using Packetloop Threat Analysis.

23 Mar 2013

What's New?! - Threat Analysis with Deep Packet Inspection

Saturday, March 23, 2013


Context is King when it comes to understanding and analysing attacks and attackers. Today we are releasing the analysis feature for the Threats module. Internally we call this feature "play by play" and it does exactly that. It allows you to peer inside every attack and step through it so you can rule the attack in or out of your analysis.

What do you need to do to enable it? Nothing. We are processing all datasets on Packetloop today to enable this new functionality.

MySQL Login and a Drop Database shown in Analysis view
In the screenshot above the full context of a MySQL root login is shown. Stepping through the attack you can see the successful connection, authentication and then a "drop database" command is issued and executed successfully on the database server.

Packet Level Detail and Protocol Context

For every Attack each packet is analysed using deep packet inspection to identify and parse the protocol used in the attack. Relevant information from each layer of the TCP/IP stack is easily accessed and presented in a tree structure so you can drill down to specific information you are looking for.

If you want to know who dropped your database tables - it's right there. The specific HTTP URI used as part of an attack - it's right there.

Clicking on the attack in the Analysis view allows you to explore and find evidence and details that can aid your analysis.

How does it work?

Every packet capture you upload is passed through multiple detection engines and analysed for attacks. At the same time we pass every packet and conversation through deep packet inspection, recording the specific protocol information related to each attack.

Rule In or Rule Out Instantly

The analysis information combined with Packetloop's Advanced Search allows you to access the context you need incredibly fast. Click on a Country, City, IP or Attack type and you can immediately filter all analysis to that data type. Using your scroll wheel or mouse pad you can zoom in and out from years to minutes or pan left and right to go forward and back in time. 

No detection system is perfect and context is king for analysts. Allowing you to inspect any attack and be able to rule it in or out of your analysis almost instantly saves you precious time. Time that can be spent finding other complex attacks.


This feature is now available for all new uploads and any packet captures stored in Packetloop. We have more functionality to add to it. As always if you have any feedback let us know.

3 Mar 2013

What's New?! - Amazon S3 Bucket Processing + More

Sunday, March 03, 2013
Our current focus is the Cloud, but that won't always be our only delivery model. Packetloop will also be available on premise, but we have a lot to deliver and the Cloud allows us to demonstrate what the product is capable of - fast and iteratively.

In the last week we added some cool new features and we will continue to add cool new things every week. Obviously the ones that Customers need go in first. Here is a summary of the 1.0 releases.

Amazon S3 Bucket Processing

Our Customers are pretty equally split between people who are producing captures from their corporate network and performing analysis in Packetloop and those that have applications that operate 100% in the Cloud. For Customers already in the Cloud copying full packet captures down to their computer and then re-uploading to Packetloop is classic double handling. It would be much easier if they could just push their full packet captures to an S3 bucket and give Packetloop the ability to process their bucket.

We built this functionality directly into the application. All the Customer has to do is grant Packetloop access to their bucket via a bucket policy. The following screenshots show how simple it is to process large captures from an S3 bucket.
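A bucket policy along the following lines does the granting. Note that the principal account ID and bucket name below are placeholders, not Packetloop's real values; the application will tell you the exact principal and actions required:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPacketloopRead",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-capture-bucket",
        "arn:aws:s3:::your-capture-bucket/*"
      ]
    }
  ]
}
```

The first Resource entry covers listing the bucket; the second covers reading the capture objects inside it.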

Select Upload Files and then S3 Bucket
Select a Capture Point and specify your Bucket Name
Select the files you want to process and submit
The packet captures are copied directly from bucket to bucket avoiding the requirement to download and re-upload the data. It's simple and fast! After the files are copied they are processed as normal by Packetloop.

In the future we will add scheduling capabilities so we can scan the bucket periodically, find new capture files and seamlessly process them (stay tuned!).

Command Line Upload

Some customers roll their full packet captures over based on time (e.g. 1 hour) or size (e.g. 100Mb) and wanted an easy way of getting these captures into Packetloop. We brought forward enough of our API functionality to allow you to log in, list/create capture points and upload files. Here's an example of how to do this using cURL. So now when Customers roll their captures they can fire the script to upload to Packetloop, and the next time they log in the data will have been processed.

Compression and Archive Support

Packet captures compress extremely well, often to 1/2 or 2/3 of their original size. Packetloop shipped with support for xz, bzip2, gzip and tar archives but, strangely enough, no zip support ;( This was addressed in a recent release.

19 Feb 2013

How to create a Full Packet Capture

Tuesday, February 19, 2013

This article was written by Tyson Garrett, COO of Packetloop, in our Support Forums. I thought it was too good to just live in support, so here it is.


Once you’ve decided that you’d like to start doing full packet capture, you may well ask how. There are two basic steps in performing full packet captures.
  1. Taking a copy of the Network Data
  2. Storing the data as a Full Packet Capture
If you know how to perform these two steps, then we expect to see you uploading shortly! If you don't then read on.

Taking a copy of the Network Data

Well, depending on your environment, you are going to have a few options:
  1. Use a port mirror (aka span port) configuration on your Internet switch
  2. Do a traffic export from your router (not recommended)
  3. Use a dedicated tapping device
If you want to get started right now, the easiest option with the least potential impact will be a port mirror on the Internet switch located between your Internet router and your firewall (you do have a firewall, don’t you?). Most modern switches can be configured to take a copy of the traffic traversing this link and send it to another port, to which you can connect your capture device (covered in another blog post link here). At Packetloop the terminology we use for this setup is a port mirror; however, some switch vendors may instead refer to it as a span port, network monitor, interface monitor or port monitor.
The configuration for setting up each switch will differ slightly based on the hardware, software version and vendor. If we haven’t listed your exact model of switch below, try checking either the vendor's support site or this page: http://wiki.wireshark.org/SwitchReference

Cisco Switch Port Mirror guides:
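As a rough illustration, on many Cisco IOS switches a SPAN session is two lines of global configuration. The interface names below are examples only; consult your model's configuration guide for exact syntax:

```
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/2
```

Here Gi0/1 is the port carrying the Internet traffic and Gi0/2 is the port your capture device is plugged into.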

Juniper Switches Port Mirror guides:

Note that depending on your environment, when your switch is under heavy load the priority of the port mirror process may be lower than the traffic forwarding priority; this may mean the switch will not mirror all the traffic to your capture device from time to time.
In situations where this is an issue for your network, or for where you would like a more dedicated solution we recommend a network tap.

Network Taps

Network taps are purpose-built devices that will mirror all traffic passing between two devices, such as your firewall and Internet router. This is achieved by connecting your Internet router into the first port of the network tap and your firewall into the second port. The third port of the network tap is where you then attach your capture device. Most network taps operate in a failsafe manner whereby, if the network tap loses power, it will stop mirroring traffic to your capture device but will still pass traffic between your firewall and Internet router.
Other models of network tap allow you to scale this out across multiple segments, e.g. 10x100Mb connections mirrored to a 1x1Gb capture device.
Some vendors that sell network taps are listed below. Note that we are not endorsing any of these products. If you have used alternative taps with success, please let us know and we will add them to the list.

Netoptics: http://www.netoptics.com/products/network-taps

Network Critical: http://www.networkcritical.com/

Gigamon: http://www.gigamon.com/g-tap-a-series-always-on-network-taps

Storing the Data as a Full Packet Capture

Now that you've configured your network to send a copy of your traffic down a port, the next decision is what to use to actually capture this traffic. As with the port mirror options, there are multiple ways of doing this within your environment, such as:
  1. Use an existing computer (laptop/desktop/server);
  2. Use a dedicated capture appliance.
If you want to start capturing right now, using an existing piece of equipment is most likely going to be your only option. Whilst a dedicated capture appliance is most likely the more robust method, it will incur additional cost and, unless you have one lying about, you won't be able to start capturing data until you receive and configure it.
To get going with the first option you can use a device running Windows, UNIX or Mac OS X. If you are using UNIX or OS X you will just need the tcpdump application. If you are running Windows we recommend you get Wireshark and, as part of the Wireshark installation, install WinPcap.
Now before you start running these applications you need to determine the following:

How am I going to ensure the timestamps in my capture files are accurate?

As your capture device is going to have at least one interface dedicated to capturing traffic, you are going to need a management interface on the device to allow you to sync with either an internal or external time source. To assist with correlation it's best if you use the same time source that your internal systems use.

Where am I going to save the data to?

Depending on the network you are capturing traffic from, your daily captures may be anywhere from less than 1Gb per day to more than 1Tb per day. Coupled with the rollover question discussed below, you'll need to determine how to ensure you don't run out of disk space on the local device. Options such as moving the data to an external USB device or a network share as a scheduled task/cron job will let you capture locally and store remotely. Once you've captured the data, don't panic about the size of the files; you will be able to compress them for archival purposes. Depending on the exact traffic mix you should expect the compressed file size to be between 20%-30% of the original size. We recommend using 7zip for Windows or lzma for Linux (on CentOS / Fedora / RHEL / Redhat Linux known as xz).
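As a rough illustration of why captures compress so well, the same LZMA algorithm that the xz tool is built on is available in Python's standard library. The ratio below is for a deliberately repetitive payload; real captures will vary with traffic mix, which is why the 20%-30% figure above is only a guideline:

```python
import lzma

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size, using the LZMA
    algorithm that the xz tool is built on."""
    return len(lzma.compress(data)) / len(data)

# Repetitive protocol headers (the bulk of many captures) compress far
# better than already-encrypted payloads, which barely compress at all.
```

A capture dominated by plain-text protocols will sit at the low end of the ratio; one full of TLS traffic will sit near 1.0.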

How often do I want to rollover to a new capture file?

To make your capture files easy to move around we recommend you roll them over once they hit 1GB of traffic. This also allows us to process them in parallel, getting your results back much faster.
To do this with tcpdump it's fairly easy using the -C parameter, where the number after -C is in millions of bytes. For example, to run tcpdump on eth1, saving the full packet size, with a capture filename of Internet-Monitor (in the /var/captures directory) preceded by the date and time, and a rollover once every 1GB, the command would be:
sudo nohup tcpdump -i eth1 -s 65535 -w /var/captures/`date +"%Y%m%d-%T"`-Internet-Monitor.pcap -C 1000 &
This will create a series of files as follows:
20130208-21:08:53-Internet-Monitor.pcap
20130208-21:08:53-Internet-Monitor.pcap1
20130208-21:08:53-Internet-Monitor.pcap2
and so on.
We have run this with nohup to cover the scenario where you've initiated the capture via a remote logon and want the capture to continue after you log off or get disconnected. To terminate the capture you'll need to kill the process manually by running:
pkill -9 tcpdump
Alternatively, if you wanted to roll the file over every hour or every 1GB (whichever comes first), you would run the following command:
sudo nohup tcpdump -i eth1 -s 65535 -w /var/captures/%Y-%m-%d-%H:%M:%S-Internet-Monitor.pcap -C 1000 -G 3600 &
This will create a series of files as follows:
2013-02-08-20:08:53-Internet-Monitor.pcap
2013-02-08-21:08:53-Internet-Monitor.pcap
2013-02-08-21:08:53-Internet-Monitor.pcap1 <- Hourly file went over 1GB, so tcpdump rolled to a new file
and so on.
As mentioned previously, Windows based systems will require Wireshark with WinPcap. Whilst Wireshark has a user interface, to keep things simple we are going to stick with the dumpcap command, which works much like tcpdump. It most likely won't be in your path, so you'll need to cd into the directory you installed Wireshark into. Because of the way Windows names network interfaces, you'll most likely need to run dumpcap with the -D option to determine which interface you wish to capture on.
C:\Program Files\Wireshark>dumpcap -D
1. \Device\NPF_{1EDF5C06-F6BD-41C7-9D91-9257429754E4} (E1G607 Intel(R) PRO/1000 MT Network Connection)
2. \Device\NPF_{08A648A7-21E2-4C45-A54A-E7BEFC3943AD} (E1G6015 Intel(R) PRO/1000 MT Network Connection)
3. \Device\NPF_{883330D9-0FA9-42FA-A74B-19A40D8C74CC} (E1G6016 Intel(R) PRO/1000 MT Network Connection)
If you still aren't sure which interface is which, run Wireshark and examine the interface details there (it shows the IP address and some other information); the names match what dumpcap supplies. Assuming device number 3 from the list above is the interface we want to capture on, we can simply use -i 3. So to roll over capture files with dumpcap once they exceed 1GB we'd run the following command:
dumpcap -P -i 3 -b filesize:1048576 -w c:\captures\Internet-Monitor.pcap
Alternatively if you wanted to roll the file over every 1 hour or every 1Gb (whichever comes first) we would run the following command:
dumpcap -P -i 3 -b filesize:1048576 -b duration:3600 -w c:\captures\Internet-Monitor.pcap
13 Feb 2013

Packetloop Commercial Release - you can upload!


Today we are super excited to announce the commercial release of Packetloop! This means you can now upload and analyze your own packet captures, finally unlocking the power of Big Data Security Analytics in the Cloud. 

We had an incredible response to our Beta program and the feedback has proved invaluable in finalising the product you can use today. We kept a few cool things back from the Beta especially for the commercial release, including a beautiful new User Interface (not that we didn't like the old one!).

You will now be able to store months or years of data in Packetloop, and constantly re-evaluate it using the most up to date threat intelligence available. Most importantly if you do uncover a previously undetected attack in your data, Packetloop gives you the ability to rewind the data and fully understand exactly what the attacker has done since they first attacked your network.

Best of all there is nothing to install. You simply grab packet captures from points around your network where you need a better understanding of the threat activity, and upload them to Packetloop for processing. No large capital outlay, no talking to sales people, no complex integration. Sign up to explore and understand the threats in your network.

Over the past 18 months we have solved a lot of complex problems around storing and searching the data, and the end result is the ability to seamlessly zoom from a view spanning years of data down to a view of just a few minutes. You can present the data from different perspectives, such as the source, the destination or even the attack itself, and then filter it rapidly to isolate a single attacker or attack from billions of packets. These are the features that allow Packetloop to provide you with clear intelligence about your network.

Because Packetloop is delivered in the Cloud, we will be able to deliver new and exciting features and updates to you constantly. We have an exciting roadmap of new features and modules to share with you in the near future, and these will only serve to extract more value and intelligence from the data you have already uploaded to Packetloop.

I could carry on here for hours about features and benefits, but we are keen for Packetloop to speak for itself. We need to thank a lot of people for all the support and encouragement we have received from within the information security community globally, so thank you! Finally I would like to acknowledge the herculean efforts of the Packetloop team in creating and delivering such a wonderful product.

We hope you enjoy using Packetloop and we look forward to working closely with you to better understand the security of your network.

What's New in Packetloop 1.0.1!

Wednesday, February 13, 2013
This release is very special to us as it's our commercial release. For us it's the end of a tough yet enjoyable development process. We shipped, soooo happy!

It is important to note that the platform is sparkling new and you will need to sign up for a Free or Paid Account before accessing over 50GB of public datasets or uploading your own packet captures. Any Early Access or Beta accounts have been retired.

In this release we shipped the following features:
  • Redesigned User Interface and Experience
  • Customers can upload data via Web Upload and Send a Disk (up to 16TB!)
  • Live processing for smaller uploads
  • The ability to delete packet captures after they are processed

User Interface and Experience

Our first commercial version had to pop! We have gone through three user interface designs whilst in development - it's important to us and we hope you like the design. 

Packetloop "Metro" User Interface - more analytics, less wood panels.

The old user interface was starting to resemble a wood panelled station wagon, and we wanted a clean analytics product look. So out went the bezels, the gradients and panels, and in came a clean design that we call "metro" internally. It wasn't inspired at all by Microsoft or Windows 8 though ;)

We opened up a lot of space in the header, removing space taken up by features that are yet to ship and placing all functions in a pivot on the left hand side.

Feature Pivot
To provide even more space when you are working in the main visualization or the data panels, the menu minimizes when you scroll down, giving you more room to operate. It's a subtle, smart transition that allows data panels to be viewed while the entire main visualization is still rendered.

Header minimised - more data panel with main visualization.

In the main visualization area we added a Zoom to Fit icon and what we call a Follow Annotation. Zoom to Fit used to be in the time period select box, but you end up using it so much it deserved its own button. The Follow Annotation tracks with your mouse pointer, providing a clear view of key threat metrics. It's designed to be unobtrusive, complementing the main visualization rather than taking away from it.

Zoom to Fit and Follow Annotation

Quick Search and Advanced Search are now accessible via icons in the navigation menu. Inspired by vim, you can also use hot keys to access them (try forward slash for Quick Search).

Quick Search

Just press forward slash and then describe what you are looking for, and make a selection with your mouse or simply press enter.

Advanced Search allows you to type in things you are looking for or click through a linked list. Think of it like a network graph - if you click on a node like Source IP address then all other criteria are filtered based on that node. This allows you to search and filter event data incredibly fast.

Advanced Search

The legend options and guides are now accessible by selecting the plus (+) icon in the Legend area. Guides are a great way to augment your analysis and bring outliers to the surface much faster. In the example below I have enabled the guide for "Looped Attacks".

Legend Options

The "Looped Attacks" guide.

Lastly Packetloop is now supported in more browsers - Internet Explorer 9 and 10, Firefox, Safari and Chrome.

Web Upload and Send a Disk

In this release we enabled the ability to upload full packet captures via Web Upload and Send a Disk methods. For Web Upload click on the "Upload Files" button in the top right, choose or create a Capture Point and then Upload.

Web Upload - Drag and Drop or Click to Select.

Send a Disk upload allows you to capture a massive amount of full packet captures and ship us the disk. You can encrypt the captures with a passphrase and supply the passphrase to us when we process them. We are initially trialling this with US customers and shipping is free. In the next release we will enable it for all customers. We support USB, eSATA or 2.5/3.5 inch disks up to 16TB in size. If you encrypt and compress these archives that is around 32TB of full packet captures! Note that Send a Disk functionality is handled via raising a ticket with support.packetloop.com but will become fully automated in the next release.
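The post doesn't specify an encryption tool, so as one illustrative possibility (the function names are ours, not Packetloop's), a passphrase-based scheme could use OpenSSL's symmetric AES-256 mode before the disk ships:

```shell
#!/bin/sh
# Illustrative passphrase-based encryption for captures headed out on a disk.
# This is one possible approach, not a Packetloop-mandated tool or format.
encrypt_capture() {
    # $1 = input pcap, $2 = encrypted output, $3 = passphrase
    openssl enc -aes-256-cbc -pbkdf2 -salt -in "$1" -out "$2" -pass "pass:$3"
}

decrypt_capture() {
    # $1 = encrypted input, $2 = decrypted output, $3 = passphrase
    openssl enc -d -aes-256-cbc -pbkdf2 -in "$1" -out "$2" -pass "pass:$3"
}
```

Whatever tool you pick, verify a decryption round-trip on one file before sealing the box; a disk that arrives with an untested passphrase is just a paperweight.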

All upload methods support gzip, xzip (lzma), and bzip2 compression and also tar archives. So if you want to tar up an entire directory, compress it and upload via Web Upload or Send a Disk you can.
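For example (the helper name and paths here are placeholders), a whole capture directory can be bundled into a single xz-compressed tar archive in one step before upload:

```shell
#!/bin/sh
# Bundle a directory of packet captures into one xz-compressed tar archive
# ready for Web Upload or Send a Disk. Swap -J for -z (gzip) or -j (bzip2)
# if you prefer those compressors.
# Usage: bundle_captures /var/captures/march march-captures.tar.xz
bundle_captures() {
    # -C keeps the archive paths relative to the captures directory's parent.
    tar -cJf "$2" -C "$(dirname "$1")" "$(basename "$1")"
}
```

The resulting `.tar.xz` is one of the accepted formats, so it can be uploaded as-is.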

Live Processing

We are designed and built for Big Data, but we haven't forgotten the little guy. We envisage that a lot of customers will upload relatively small captures to test the service before they commit large amounts of data. We will process small captures live, without even engaging the Big Data back end, making it as fast to process 100Mb as it is to process 1TB.

Packet Capture Deletion

In this release customers are able to delete packet captures after we process them. The decision is totally up to you and will be integrated into all upload methods. Once the packet captures are processed they are only required for looping (searching for zero days) and to make new features instantly accessible when we ship them. All the data extracted from the packet capture is inserted into our NoSQL database to be supplied to the user interface.

After the packet captures have been processed customers can click on Settings -> Usage and then delete the original packet capture or the data extracted from the packet capture.

Thanks again!

To all the people that helped us during Early Access and Private Beta. Your interest, passion, excitement and suggestions have been invaluable to us. We are at the end of the line if you want to reach out to us on Twitter, Google+, Facebook or Support.