Bug #4074

LAPD timers completely broken

Added by laforge over 1 year ago. Updated 3 months ago.

Status: New
Priority: Normal
Assignee: Osmocom CNI Developers
Category: libosmogsm
Target version: -
Start date: 06/21/2019
Due date:
% Done: 0%
Spec Reference: TS 48.006 Sec. 8.9

Description

The T200 timer must be started at the moment the related frame (e.g. SABM, I, ...) is pulled out by L1 at the bottom of LAPD.

However, looking at the code, it seems to be doing this completely wrong: it starts the timer when it receives e.g. an ESTABLISH REQUEST from L3. Any amount of time can pass between that point (L3 runs asynchronously to the L1 of the underlying TDM) and the actual transmission.

In the end, the current behavior means that T200 is started way too early, and hence times out way too soon.
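To illustrate the difference, here is a minimal sketch with purely hypothetical type and function names (this is not the actual lapd_core.c API, just an outline of the two behaviours):

    #include <stdint.h>

    struct lapd_link;                               /* opaque, hypothetical */
    struct msgb;                                    /* frame buffer, hypothetical here */

    void start_t200(struct lapd_link *dl);          /* hypothetical helpers */
    void enqueue_frame(struct lapd_link *dl, struct msgb *m);
    struct msgb *build_sabm(struct lapd_link *dl);
    struct msgb *dequeue_frame(struct lapd_link *dl);
    void hand_to_l1(struct lapd_link *dl, struct msgb *m);

    /* current behaviour: T200 is armed as soon as L3 requests establishment,
     * long before the SABM actually leaves the BTS */
    void dl_establish_request(struct lapd_link *dl)
    {
        enqueue_frame(dl, build_sabm(dl));
        start_t200(dl);             /* too early: frame not yet transmitted */
    }

    /* required behaviour: arm T200 only at the moment L1 pulls the frame
     * for transmission */
    void l1_pull_frame(struct lapd_link *dl)
    {
        hand_to_l1(dl, dequeue_frame(dl));
        start_t200(dl);             /* timer now reflects the actual transmission */
    }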


Related issues

Related to OsmoBTS - Bug #4066: osmo-bts: sending BSC an ERROR Indication with "unsolicited UA response" in "sms:sysmo" osmo-gsm-tester test (Resolved, 06/20/2019)

Related to OsmoBTS - Bug #4487: revisit fn-advance / rts-advance default settings (Resolved, 04/07/2020)

History

#1 Updated by laforge over 1 year ago

  • Related to Bug #4066: osmo-bts: sending BSC an ERROR Indication with "unsolicited UA response" in "sms:sysmo" osmo-gsm-tester test added

#2 Updated by laforge 3 months ago

  • Spec Reference set to TS 48.006 Sec. 8.9

One idea that came up recently was to not use system timers (tied to "real time") in osmo-bts, but to use frame numbers as a time base. So if a LAPDm frame is encoded into bursts of a given frame number, we can express the timeout of n milliseconds as a frame count and check it against the uplink frame numbers: once they exceed the timeout, the timer expires. This way it doesn't really matter how large an fn-advance or the like we use: the time-out would always automatically scale with it.
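A rough sketch of what such an FN-based timer could look like (hypothetical helper, not existing libosmocore code), assuming one TDMA frame lasts 120/26 ≈ 4.615 ms and frame numbers wrap at the GSM hyperframe:

    #include <stdint.h>
    #include <stdbool.h>

    #define GSM_HYPERFRAME (26 * 51 * 2048)

    struct fn_timer {
        bool running;
        uint32_t expiry_fn;         /* absolute FN at which the timer fires */
    };

    /* convert milliseconds to TDMA frames, rounding up (26 frames per 120 ms) */
    static uint32_t ms_to_frames(uint32_t ms)
    {
        return (ms * 26 + 119) / 120;
    }

    /* start the timer relative to the FN in which the frame is actually sent */
    static void fn_timer_start(struct fn_timer *t, uint32_t tx_fn, uint32_t timeout_ms)
    {
        t->running = true;
        t->expiry_fn = (tx_fn + ms_to_frames(timeout_ms)) % GSM_HYPERFRAME;
    }

    /* check against every uplink FN reported by the PHY; wrap-safe comparison */
    static bool fn_timer_expired(const struct fn_timer *t, uint32_t current_fn)
    {
        if (!t->running)
            return false;
        uint32_t d = (current_fn + GSM_HYPERFRAME - t->expiry_fn) % GSM_HYPERFRAME;
        return d < GSM_HYPERFRAME / 2;
    }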

Still, what we would need is to start timers when L1 pulls an L2 frame, which is quite different from the current architecture.

3GPP TS 48.006 Section 8.9 contains some nice tables and explanations on what kind of requirements a "good" T200 setting must fulfill. If it is too short (for our overall processing + propagation delay), we will get plenty of useless retransmissions that reduce our throughput. And if it is too long, we equally lose throughput, as lost frames are detected too late.

#3 Updated by laforge 3 months ago

  • Related to Bug #4487: revisit fn-advance / rts-advance default settings added

#4 Updated by laforge 3 months ago

I'm wondering how we can make progress here. Ideally, I would want to make the T200 timers scale automatically as configuration parameters like fn-advance change.

What we would need to understand is the amount of processing delay involved in:
  1. Tx Delay: pulling a frame from LAPDm, encoding and interleaving it, and sending (the last sample of) its last burst over the Um interface; ideally measured at the point it hits the antenna, but I guess the moment the samples are handed to the SDR driver is good enough
  2. Rx Delay: the time between receiving the (first? last?) I/Q sample of the first burst of a frame and passing it up into LAPDm.

The question is, how to determine those values, ideally at runtime, or at the very least once during start-up, so we can use it to configure the various timers.

The precision doesn't have to be very high; something in the millisecond domain is enough, as that's the basis for T200. If it simplifies things, we can also normalize it to units of frame durations (4.6 ms).

#5 Updated by laforge 3 months ago

Note: I don't think we really need the separate Rx and Tx delays; their sum would be sufficient.

Can we learn that sum of rx+tx delay (at least quantized to FN granularity) by comparing the FN received in uplink bursts with the FN we use for generating downlink data (i.e. CLOCK IND plus fn-advance)?
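Quantized to FN granularity, that comparison could look like the following sketch (made-up helper name, not actual osmo-bts code):

    #define GSM_HYPERFRAME (26 * 51 * 2048)

    /* dl_fn: the FN we are currently filling on downlink (CLOCK IND + fn-advance)
     * ul_fn: the FN of the latest uplink burst received from the PHY
     * returns the combined Rx+Tx pipeline delay in frames, wrap-safe */
    static uint32_t pipeline_delay_frames(uint32_t dl_fn, uint32_t ul_fn)
    {
        return (dl_fn + GSM_HYPERFRAME - ul_fn) % GSM_HYPERFRAME;
    }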

#6 Updated by fixeria 3 months ago

Hi Harald,

> One idea that came up recently was to not use system timers (tied to "real time") in osmo-bts, but to use frame numbers as a time base.

nice idea!

> Can we learn that sum of rx+tx delay (at least quantized to FN granularity) by comparing the FN received in uplink bursts with the FN we use for generating downlink data (i.e. CLOCK IND plus fn-advance)?

Yep, for sure we can. I think this is the easiest way.
