$ cat /proc/sys/net/ipv4/tcp_low_latency
0
$
tcp_low_latency - BOOLEAN
If set, the TCP stack makes decisions that prefer lower
latency as opposed to higher throughput. By default, this
option is not set meaning that higher throughput is preferred.
An example of an application where this default should be
changed would be a Beowulf compute cluster.
Default: 0
Source: Linux kernel source documentation.
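As a quick illustration (my own sketch, not part of the kernel documentation quoted above), the knob can be flipped programmatically by writing to the same /proc path shown in the transcript; this needs root and only works on kernels that still expose the tcp_low_latency sysctl.

/* Minimal sketch: prefer latency by writing "1" to the sysctl shown above.
 * Requires root; fails cleanly if the kernel does not expose this knob. */
#include <stdio.h>

int main(void)
{
    const char *path = "/proc/sys/net/ipv4/tcp_low_latency";
    FILE *f = fopen(path, "w");
    int value = -1;

    if (!f) {
        perror(path);            /* not root, or sysctl not available */
        return 1;
    }
    fputs("1\n", f);             /* 1 = prefer latency, 0 = prefer throughput */
    fclose(f);

    /* Read the value back to confirm the change. */
    f = fopen(path, "r");
    if (f && fscanf(f, "%d", &value) == 1)
        printf("tcp_low_latency = %d\n", value);
    if (f)
        fclose(f);
    return 0;
}

From a shell the same thing is simply sysctl -w net.ipv4.tcp_low_latency=1, or an echo of 1 into the /proc file as root.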
/*
* CONFIG_LATENCYTOP enables a kernel latency tracking infrastructure that is
* used by the "latencytop" userspace tool. The latency that is tracked is not
* the 'traditional' interrupt latency (which is primarily caused by something
* else consuming CPU), but instead, it is the latency an application encounters
* because the kernel sleeps on its behalf for various reasons.
*
* This code tracks 2 levels of statistics:
* 1) System level latency
* 2) Per process latency
*
* The latency is stored in fixed sized data structures in an accumulated form;
* if the "same" latency cause is hit twice, this will be tracked as one entry
* in the data structure. Both the count, total accumulated latency and maximum
* latency are tracked in this data structure. When the fixed size structure is
* full, no new causes are tracked until the buffer is flushed by writing to
* the /proc file; the userspace tool does this on a regular basis.
* A latency cause is identified by a stringified backtrace at the point that
* the scheduler gets invoked. The userland tool will use this string to
* identify the cause of the latency in human readable form.
*
* The information is exported via /proc/latency_stats and /proc/<pid>/latency.
* These files look like this:
*
* Latency Top version : v0.1
* 70 59433 4897 i915_irq_wait drm_ioctl vfs_ioctl do_vfs_ioctl sys_ioctl
* |  |     |    |
* |  |     |    +----> the stringified backtrace
* |  |     +--------> The maximum latency for this entry in microseconds
* |  +--------------> The accumulated latency for this entry (microseconds)
* +-----------------> The number of times this entry is hit
*
* (note: the average latency is the accumulated latency divided by the number
* of times)
*/
Source: Linux kernel source 2.6.32, kernel/latencytop.c
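For illustration only (a sketch of mine, not part of the kernel sources cited above), the system-wide file can be parsed with ordinary stdio: every line after the version header carries the hit count, the accumulated latency, the maximum latency and the stringified backtrace, exactly as the diagram in the comment describes. The sketch assumes CONFIG_LATENCYTOP=y and that collection has been switched on, e.g. via /proc/sys/kernel/latencytop.

/* Minimal sketch: dump /proc/latency_stats in the layout described above.
 * Assumes CONFIG_LATENCYTOP=y and that collection is enabled, e.g. with
 * "echo 1 > /proc/sys/kernel/latencytop". */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/latency_stats", "r");
    char line[1024];

    if (!f) {
        perror("/proc/latency_stats");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        unsigned long long count, total_us, max_us;
        int pos;

        /* The "Latency Top version : ..." header does not match and is skipped. */
        if (sscanf(line, "%llu %llu %llu %n", &count, &total_us, &max_us, &pos) < 3)
            continue;
        line[strcspn(line, "\n")] = '\0';
        printf("hits=%-6llu avg=%lluus max=%lluus  %s\n",
               count, count ? total_us / count : 0, max_us, line + pos);
    }
    fclose(f);
    return 0;
}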
The Hop Protocol
The Hop protocol operates over an unreliable datagram service such as UDP/IP. The core goal of the Hop protocol is to provide the lowest latency and highest throughput possible when transferring packets across wide-area networks.
The key elements of the Hop protocol are:
Non-Blocking: packets are forwarded despite the loss of packets ordered earlier.
Lazy-Selective-Retransmits: nacks are sent for specific lost packets after a short delay, to avoid requesting data which was not lost but merely arrived out of order or is sequenced after lost data.
Rate-based flow control: a rate-based flow regulator provides explicit support for high delay-bandwidth networks. In addition, the rate-based regulator can utilize bandwidth reservation services if such exist in the physical network.
Source: Yair Amir, Claudiu Danilov, Jonathan Stanton, "A Low Latency, Loss Tolerant Architecture and Protocol for Wide Area Group Communication", Department of Computer Science, Johns Hopkins University, Baltimore, MD.
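To make the first two elements above concrete, here is a minimal receiver-side sketch of that style of protocol, not the authors' implementation: arriving packets are delivered immediately regardless of earlier losses (non-blocking), gaps are merely recorded, and nacks go out only for sequence numbers still missing after a short delay (lazy selective retransmits). The function names, the window size and the 50 ms delay are all illustrative assumptions; rate-based flow control is left out.

/* Receiver-side sketch (illustrative only, not the paper's code). */
#include <stdio.h>
#include <stdint.h>

#define WINDOW        1024        /* how many outstanding sequence numbers we track */
#define NACK_DELAY_MS 50          /* assumed "short delay" before a nack is sent    */

struct gap { uint32_t seq; uint64_t first_missed_ms; int active; };

static struct gap gaps[WINDOW];
static uint32_t next_expected;    /* lowest sequence number not yet seen */

/* Called for every datagram, in whatever order it arrives. */
static void hop_on_packet(uint32_t seq, uint64_t now_ms)
{
    /* Non-blocking: hand the packet to the application immediately,
     * even if earlier packets are still missing. */
    printf("%3llu ms: deliver seq %u\n", (unsigned long long)now_ms, (unsigned)seq);

    if (seq < next_expected) {    /* late or retransmitted packet: fill its gap */
        gaps[seq % WINDOW].active = 0;
        return;
    }
    for (uint32_t s = next_expected; s < seq; s++) {   /* record newly revealed gaps */
        struct gap *g = &gaps[s % WINDOW];
        g->seq = s;
        g->first_missed_ms = now_ms;
        g->active = 1;
    }
    next_expected = seq + 1;
}

/* Lazy-selective-retransmits: called periodically, nack only what is still
 * missing after NACK_DELAY_MS, so mere reordering never triggers a request. */
static void hop_send_nacks(uint64_t now_ms)
{
    for (int i = 0; i < WINDOW; i++)
        if (gaps[i].active && now_ms - gaps[i].first_missed_ms >= NACK_DELAY_MS)
            printf("%3llu ms: NACK seq %u\n", (unsigned long long)now_ms,
                   (unsigned)gaps[i].seq);
}

int main(void)
{
    /* Simulated arrivals: seq 2 is merely reordered, seq 5 is genuinely lost. */
    hop_on_packet(0, 0); hop_on_packet(1, 1); hop_on_packet(3, 2); hop_on_packet(4, 3);
    hop_on_packet(2, 10);          /* arrives late: its gap closes, no nack needed */
    hop_on_packet(6, 20); hop_on_packet(7, 21);
    hop_send_nacks(30);            /* seq 5 only 10 ms overdue: stay quiet         */
    hop_send_nacks(80);            /* seq 5 still missing after 60 ms: nack it     */
    return 0;
}

The simulated arrivals in main() show the distinction the paper draws: the reordered packet (seq 2) closes its gap and never causes a nack, while the genuinely lost packet (seq 5) is nacked only once the delay has expired.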