changeset 41:bda6b24385f7

twjit config: revisit default settings

The only actual change here is bumping the high water mark setting from 3 to 4; the rest of this patch is comment changes.
author Mychaela Falconia <falcon@freecalypso.org>
date Fri, 20 Dec 2024 22:13:01 +0000
parents d73b6ec27ae6
children 334d883b96ba
files src/twjit.c
diffstat 1 files changed, 57 insertions(+), 4 deletions(-)
--- a/src/twjit.c	Fri Dec 20 20:07:01 2024 +0000
+++ b/src/twjit.c	Fri Dec 20 22:13:01 2024 +0000
@@ -16,10 +16,63 @@
 void twrtp_jibuf_init_defaults(struct twrtp_jibuf_config *config)
 {
 	memset(config, 0, sizeof(struct twrtp_jibuf_config));
-	config->bd_start = 2;	/* smallest allowed */
-	config->bd_hiwat = 3;	/* Nstart+1 is practically-useful minimum */
-	config->thinning_int = 17;	/* prime number, usually 340 ms */
-	config->max_future_sec = 10;	/* 10 s is a long time for voice */
+
+	/* While the theoretical minimum starting fill level is 1, the
+	 * practically useful minimum (achieving lowest latency, but not
+	 * incurring underruns in normal healthy operation) is 2 for typical
+	 * network configurations that combine elements with "perfect" 20 ms
+	 * timing (T1/E1 interfaces, external IP-PSTN links, software
+	 * transcoders timed by system clock etc) and GSM-to-IP OsmoBTS
+	 * whose 20 ms timing contains the small inherent jitter of TDMA. */
+	config->bd_start = 2;
+
+	/* The high water mark setting determines when the standing queue
+	 * thinning mechanism kicks in.  A standing queue that is longer
+	 * than the starting fill level will occur when the flow starts
+	 * during a network latency spike, but then the network latency
+	 * goes down.  If this setting is too high, deep standing queues
+	 * will persist, adding needless latency to speech or CSD.
+	 * If this setting is too low, the thinning mechanism will be
+	 * too invasive, needlessly and perhaps frequently deleting a quantum
+	 * of speech or data from the stream and incurring a phase shift.
+	 * Starting fill level plus 2 seems like a good default. */
+	config->bd_hiwat = 4;
+
+	/* When the standing queue thinning mechanism does kick in,
+	 * it drops every Nth packet, where N is the thinning interval.
+	 * Given that this mechanism forcibly deletes a quantum of speech
+	 * or data from the stream, these induced disruptions should be
+	 * spaced out, and the managing operator should also keep in mind
+	 * that the incurred phase shift may be a problem for some
+	 * applications, particularly CSD.  Our current default is
+	 * a prime number, reducing the probability that the thinning
+	 * mechanism will interfere badly with intrinsic features of the
+	 * stream being thinned.  17 quantum units at 20 ms per quantum
+	 * is 340 ms, which should be sufficiently long spacing to make
+	 * speech quantum deletions tolerable. */
+	config->thinning_int = 17;
+
+	/* With RTP timestamps being 32 bits and with the usual RTP
+	 * clock rate of 8000 timestamp units per second, a packet may
+	 * arrive that claims to be as far as 3 days into the future.
+	 * Such aberrant RTP packets are jocularly referred to as
+	 * time travelers.  Assuming that actual time travel either
+	 * does not exist at all or at least does not happen in the
+	 * present context, we reason that when such "time traveler" RTP
+	 * packets do arrive, we must be dealing with the effect of a
+	 * software bug or misdesign or misconfiguration in whatever
+	 * foreign network element is sending us RTP.  In any case,
+	 * irrespective of the cause, we must be prepared for the
+	 * possibility of seeming "time travel" in the incoming RTP stream.
+	 * We implement an arbitrary threshold: if the received RTP ts
+	 * is too far into the future, we treat that packet as the
+	 * beginning of a new stream, same as SSRC change or non-quantum
+	 * ts increment.  This threshold has 1 s granularity, which is
+	 * sufficient for its intended purpose of catching gross errors.
+	 * The minimum setting of this threshold is 1 s, but let's
+	 * default to 10 s, being generous to networks with really bad
+	 * latency. */
+	config->max_future_sec = 10;
 }
 
 /* create and destroy functions */
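
The comments in this patch describe defaults that a caller is expected to be able to override per application. The following is a minimal usage sketch, not part of this changeset: the header name "twjit.h" and the wrapper function are assumptions made for illustration, while twrtp_jibuf_init_defaults() and the config field names come from the patch itself.

#include "twjit.h"	/* assumed header declaring struct twrtp_jibuf_config */

static void example_fill_config(struct twrtp_jibuf_config *cfg)
{
	/* start from the library defaults shown in the patch */
	twrtp_jibuf_init_defaults(cfg);
	/* illustrative override: tolerate a deeper standing queue before
	 * thinning kicks in, at the cost of more fixed latency */
	cfg->bd_hiwat = 5;
	/* illustrative override: space forced deletions further apart */
	cfg->thinning_int = 23;
}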
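The thinning comment describes a "drop every Nth packet" policy once the standing queue reaches the high water mark. The stand-alone sketch below only illustrates that policy; it is not the twjit implementation, all identifiers in it are hypothetical, and the exact trigger condition and bookkeeping inside twjit may differ.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: decide whether the current quantum should be
 * dropped, given the standing queue depth, the configured high water
 * mark and thinning interval, and a running packet counter. */
static bool should_thin(uint32_t queue_depth, uint32_t hiwat,
			uint32_t thinning_int, uint32_t *counter)
{
	if (queue_depth < hiwat) {
		*counter = 0;		/* standing queue is acceptable */
		return false;
	}
	if (++(*counter) < thinning_int)
		return false;		/* not yet the Nth packet */
	*counter = 0;
	return true;			/* delete this quantum */
}

With thinning_int = 17 and 20 ms quanta, a deletion would thus be forced at most once every 340 ms while the standing queue stays at or above the high water mark, matching the arithmetic in the comment.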
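The "time traveler" comment rests on simple arithmetic: a 32-bit RTP timestamp at 8000 units per second wraps after 2^32 / 8000 s, so the largest apparent forward jump is 2^31 / 8000 s, about 268435 s or roughly 3.1 days. A sketch of the kind of check the comment describes follows; the variable names and the exact comparison used inside twjit are assumptions.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical check: treat a packet as a "time traveler" (and thus as
 * the start of a new stream) if its timestamp is a forward jump of more
 * than max_future_sec seconds relative to the expected timestamp. */
static bool ts_is_time_traveler(uint32_t rx_ts, uint32_t expected_ts,
				uint32_t max_future_sec)
{
	uint32_t delta = rx_ts - expected_ts;	/* modulo-2^32 difference */

	/* a jump of 2^31 or more reads as a wrapped-around late packet,
	 * not as travel into the future */
	if (delta >= (UINT32_C(1) << 31))
		return false;
	/* forward jump that exceeds the threshold at 8000 units/s */
	return delta > max_future_sec * 8000;
}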