New direction. This is just damn cool. These folks have written custom drivers that exploit collisions in the original RSS (Receive-Side Scaling) load-balancing algorithm developed by Microsoft, such that the NIC's RX queues end up properly 5-tuple load-balanced. This lets a monitoring tool exploit the locality of reference and cache coherency that come from having both directions of a given connection steered to the same CPU core for analysis. By manipulating the secret key fed to the Toeplitz hash function employed by RSS, these researchers appear to have achieved IDS-optimized symmetric load-balancing entirely in hardware, using commodity network cards:
http://www.ndsl.kaist.edu/~shinae/papers/TR-symRSS.pdf
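The trick is compact enough to sketch: the Toeplitz hash XORs together a sliding 32-bit window of the secret key, one window per set bit of the 5-tuple, and swapping source/destination shifts every input bit by a multiple of 16 bit positions. So a key built from a repeating 16-bit pattern (0x6d5a, per the paper) presents identical windows at those offsets, and both directions of a flow hash the same. Here's a rough Python sketch; the queue mask at the end is my simplification, since real NICs index an indirection table with the hash's low bits:

```python
import struct

# Standard RSS Toeplitz hash: for each set bit of the input (MSB first),
# XOR in the 32-bit window of the secret key starting at that bit position.
def toeplitz_hash(key: bytes, data: bytes) -> int:
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    h = 0
    for pos in range(len(data) * 8):
        if data[pos // 8] & (0x80 >> (pos % 8)):
            h ^= (key_int >> (key_bits - 32 - pos)) & 0xFFFFFFFF
    return h

# A 16-bit-periodic key makes the hash symmetric: swapping src/dst shifts
# each input bit by 16 or 32 positions, where the key window is identical.
SYMMETRIC_KEY = bytes.fromhex("6d5a") * 20  # 40-byte RSS key, 0x6d5a repeated

def flow_tuple(src_ip: bytes, dst_ip: bytes, sport: int, dport: int) -> bytes:
    """12-byte IPv4 TCP/UDP hash input: src IP, dst IP, src port, dst port."""
    return src_ip + dst_ip + struct.pack("!HH", sport, dport)

fwd = flow_tuple(bytes([10, 0, 0, 1]), bytes([192, 168, 1, 9]), 51234, 80)
rev = flow_tuple(bytes([192, 168, 1, 9]), bytes([10, 0, 0, 1]), 80, 51234)

h_fwd = toeplitz_hash(SYMMETRIC_KEY, fwd)
h_rev = toeplitz_hash(SYMMETRIC_KEY, rev)
assert h_fwd == h_rev  # both directions land on the same RX queue
print(f"queue index (16 queues, simplified): {h_fwd & 0xF}")
```

With a default (non-periodic) key the two directions generally hash to different queues, which is exactly why stock RSS is awkward for IDS workloads.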
I need to see how this compares with similar work by Luca Deri and friends:
http://www.ntop.org/pf_ring/hardware-based-symmetric-flow-balancing-in-dna/
On the plus side, this technique appears to have already made it into the code for the DNA drivers, and a patch has recently been committed to enable this functionality for libpcap-based applications:
http://listgateway.unipi.it/pipermail/ntop-misc/2012-July/003037.html
Unfortunately, at present it seems that the DNA drivers can only be used by one network monitoring application at a time. None of this inherently solves my virtualization problem, but it's a big step in the right direction.
Stay tuned...
Friday, August 10, 2012
Wednesday, August 1, 2012
It's alive...
An 82599-based 10Gb NIC, direct-mapped via PCIe SR-IOV into a KVM-paravirtualized Ubuntu 12.04 guest. The initial test run looks very promising: full-speed packet capture with zero copy and zero packet loss, thanks to the PF_RING DNA drivers running *inside the virtual machine*.
root@randy:~# dmidecode | grep Vendor
        Vendor: Bochs
root@randy:~# dmesg | grep KVM | sed 's/\[[^]]*\]//'
 Booting paravirtualized kernel on KVM
 KVM setup async PF for cpu 0
root@randy:~# dmesg | grep ixgbe | grep dna0 | sed 's/\[[^]]*\]//'
 ixgbe 0000:00:06.0: dna0: MAC: 2, PHY: 2, PBA No: 400900-000
 ixgbe 0000:00:06.0: dna0: Enabled Features: RxQ: 16 TxQ: 16 FdirHash
 ixgbe 0000:00:06.0: dna0: Intel(R) 10 Gigabit Network Connection
 ixgbe 0000:00:06.0: dna0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
root@randy:~# dmesg | grep PF_RING | sed 's/\[[^]]*\]//'
 [PF_RING] Welcome to PF_RING 5.4.5 ($Revision: 5614$)
 [PF_RING] registered /proc/net/pf_ring/
 [PF_RING] Min # ring slots 4096
 [PF_RING] Slot version 14
 [PF_RING] Capture TX Yes [RX+TX]
 [PF_RING] Transparent Mode 0
 [PF_RING] IP Defragment No
 [PF_RING] Initialized correctly
root@randy:~# tcpdump -i dna0 -s0 -w /dev/null
tcpdump: listening on dna0, link-type EN10MB (Ethernet), capture size 8192 bytes
750380 packets captured
750380 packets received by filter
0 packets dropped by kernel