Bug #4074

open

LAPD timers completely broken

Added by laforge over 3 years ago. Updated 1 day ago.

Status:
Stalled
Priority:
Normal
Assignee:
Category:
libosmogsm
Target version:
-
Start date:
06/21/2019
Due date:
% Done:

10%

Spec Reference:
TS 48.006 Sec. 8.9

Description

The T200 timer must be started at the moment the related frame (e.g. SABM, I, ...) is pulled out by L1 at the bottom of LAPD.

However, looking at the code, it seems to be doing it completely wrong: it starts the timer when it receives e.g. an ESTABLISH REQUEST from L3. There can be any amount of time between that (L3 runs asynchronously to the L1 of the underlying TDM) and the actual transmission.

In the end, the current behavior means that T200 is started way too early, and hence times out way too soon.


Related issues

Related to OsmoBTS - Bug #4066: osmo-bts: sending BSC an ERROR Indication with "unsolicited UA response" in "sms:sysmo" osmo-gsm-tester test (Resolved, Hoernchen, 06/20/2019)

Related to OsmoBTS - Bug #4487: revisit fn-advance / rts-advance default settings (Resolved, pespin, 04/07/2020)

Actions #1

Updated by laforge over 3 years ago

  • Related to Bug #4066: osmo-bts: sending BSC an ERROR Indication with "unsolicited UA response" in "sms:sysmo" osmo-gsm-tester test added
Actions #2

Updated by laforge about 2 years ago

  • Spec Reference set to TS 48.006 Sec. 8.9

One idea that came up recently was to not use system timers (tied to "real time") in osmo-bts, but to use frame numbers as a time base. So if a LAPDm frame is encoded into bursts of a given frame number, we can express the "n" milliseconds of T200 as a frame count and check the uplink frame numbers; once they exceed the timeout, the timer expires. This way it doesn't really matter how much fn-advance or the like we use: the time-out would always automatically scale with it.

Still, what we would need is to start timers when L1 pulls an L2 frame, which is quite different from the current architecture.

3GPP TS 48.006 Section 8.9 contains some nice tables and explanations on what requirements a "good" T200 setting must fulfill. If it is too short (for our overall processing + propagation delay), we will get plenty of useless retransmissions that reduce our throughput. And if it is too long, we equally lose throughput, as lost frames are detected too late.

Actions #3

Updated by laforge about 2 years ago

  • Related to Bug #4487: revisit fn-advance / rts-advance default settings added
Actions #4

Updated by laforge about 2 years ago

I'm wondering how we can make progress here. Ideally, I would want to make the T200 timers scale automatically as configuration parameters like fn-advance change.

What we would need to understand is the amount of processing delay between the time of
  1. Tx Delay: pulling a frame from LAPDm, encoding and interleaving it, and sending (the last sample of) its last burst over the Um interface; ideally measured at the antenna, but the point of handing the samples to the SDR driver is probably good enough
  2. Rx Delay: the time between receiving the (first? last?) I/Q sample of the first burst of a frame and delivering the frame into LAPDm

The question is, how to determine those values, ideally at runtime, or at the very least once during start-up, so we can use it to configure the various timers.

The precision doesn't have to be very high; the millisecond domain is sufficient, as that's the basis for T200. If it simplifies things, we can also normalize it to units of frame duration (4.615 ms).

Actions #5

Updated by laforge about 2 years ago

Note: I don't think we really need the separate Rx/Tx delay, but the sum would be sufficient.

Can we learn that sum of rx+tx delay (at least quantized to FN granularity) by comparing the FN received in uplink bursts with the FN we use for generating downlink data (i.e. CLOCK IND plus fn-advance)?

Actions #6

Updated by fixeria about 2 years ago

Hi Harald,

> One idea that came up recently was to not use system timers (tied to "real time") in osmo-bts, but to use frame numbers as a time base.

nice idea!

> Can we learn that sum of rx+tx delay (at least quantized to FN granularity) by comparing the FN received in uplink bursts with the FN we use for generating downlink data (i.e. CLOCK IND plus fn-advance)?

Yep, for sure we can. I think this is the easiest way.

Actions #7

Updated by laforge almost 2 years ago

  • Priority changed from Normal to High
Actions #8

Updated by laforge 5 months ago

  • Assignee changed from Osmocom CNI Developers to msuraev
  • Priority changed from High to Normal
Actions #9

Updated by msuraev 16 days ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 10

The spec reference seems wrong. T200 is discussed in TS 04.06, for example: "The task set T200 shall be performed at the instant right before transmitting a frame, when the PH-READY-TO-SEND primitive is received from the physical layer."

The timer itself is defined in 3GPP TS 04.06 Sec. 5.8.1

Actions #10

Updated by msuraev 16 days ago

There's an interesting idea which came up during discussion with Vadim: l1sap_info_time_ind() in the BTS gives us a constantly updated FN in the uplink direction, stored in struct gsm_time. Although T200 is the timer we set before the BTS sends the frame to L1 for downlink transmission, we can still re-use the uplink counter: for us it's a raw tick value, so the only thing that matters is the duration (in GSM_TDMA_FN_DURATION_uS steps) between two points in time.

Actions #11

Updated by laforge 16 days ago

It's TS 44.006, if it was 04.06 before (the spec series was renumbered).

Actions #12

Updated by laforge 16 days ago

Or 24.006

Actions #13

Updated by msuraev 1 day ago

  • Status changed from In Progress to Stalled