Bug #3135

ttcn3-bts-test: timeout on missing osmo-bsc

Added by neels over 1 year ago. Updated 3 months ago.

Status: Rejected
Priority: Normal
Assignee:
Category: -
Target version: -
Start date: 04/04/2018
Due date:
% Done: 0%
Spec Reference:
Tags:

Description

Recently, we managed to break our osmo-bsc-main docker container with a broken config file.
As a result, the BTS_Test suite got stuck upon starting the first test, never timing out and never reporting an error.
Make sure that the test suite notices within a sensible timeout that something is wrong when osmo-bsc.cfg is broken.
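One generic way to guarantee such an upper bound is to wrap the suite invocation in coreutils `timeout`. This is only a sketch of the idea, not the project's actual mechanism; the `run_with_budget` helper name and the budget values are assumptions for illustration.

```shell
#!/bin/bash
# Sketch: bound a test-suite run with coreutils timeout so that a
# misconfigured or missing osmo-bsc cannot hang the run forever.
# run_with_budget and its budget values are illustrative assumptions.
run_with_budget() {
  local budget="$1"; shift
  # --kill-after sends SIGKILL if the process ignores the initial SIGTERM
  timeout --kill-after=30 "$budget" "$@"
}

# Demonstration with a stand-in command that overruns a 1-second budget;
# coreutils timeout exits with status 124 when the deadline is hit.
run_with_budget 1 sleep 5
echo "exit code: $?"
```

In a real setup the wrapped command would be the suite invocation (e.g. the jenkins.sh run), with a budget comfortably above the suite's normal runtime.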

History

#1 Updated by laforge over 1 year ago

  • Project changed from Cellular Network Infrastructure to OsmoBTS
  • Assignee set to sysmocom

#2 Updated by laforge over 1 year ago

  • Tags set to TTCN3

#3 Updated by laforge over 1 year ago

  • Assignee changed from sysmocom to stsp

#4 Updated by stsp about 1 year ago

This is likely a duplicate of issue #3149.

#5 Updated by stsp about 1 year ago

With current osmo-ttcn3-hacks master, the BTS tests do not hang forever if osmo-bsc is missing:

MC2> MTC@fintan: Test case TC_chan_act_stress started.
MTC@fintan: Test case TC_chan_act_stress finished. Verdict: fail reason: Timeout waiting for ASP_IPA_EVENT_UP
MTC@fintan: Test case TC_chan_act_react started.
MTC@fintan: Test case TC_chan_act_react finished. Verdict: fail reason: Timeout waiting for ASP_IPA_EVENT_UP
MTC@fintan: Test case TC_chan_deact_not_active started.

Could this problem be specific to the docker setup? Can it still be provoked there?

#6 Updated by stsp about 1 year ago

I don't have ready access to a Docker setup which I could use to quickly check if this is still a problem.
Could someone else check this on my behalf or should I invest time in building my own dockerized setup and try to reproduce?

#7 Updated by stsp about 1 year ago

  • Status changed from New to In Progress

#8 Updated by laforge about 1 year ago

On Tue, Aug 07, 2018 at 10:20:38AM +0000, stsp [REDMINE] wrote:

I don't have ready access to a Docker setup which I could use to quickly check if this is still a problem.
Could someone else check this on my behalf or should I invest time in building my own dockerized setup and try to reproduce?

The setup should be rather trivial and straightforward. You will have to build a handful of docker images
via their respective "make" commands in docker-playground, and then use ./jenkins.sh to execute the test
suite. I think it can be expected of everyone in the team to do this.

#9 Updated by fixeria about 1 year ago

  • Status changed from In Progress to Feedback

I just upgraded to the latest source code, and tested two things (separately):

  • IPA unit-id mismatch: 1234 vs 1200,
  • OML remote-ip mismatch 172.18.9.11 vs 172.18.9.100.

In both cases all tests (excluding TC_lapdm_selftest) failed; no hangs were observed...
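For reference, a unit-id mismatch of this kind could be provoked with an osmo-bts.cfg fragment like the following. This is a hedged sketch: the 1234 value comes from the comment above, while the surrounding lines are assumptions for illustration, not the actual docker-playground.git config.

```
! Sketch of an osmo-bts.cfg fragment with a deliberately wrong IPA
! unit-id (1234 where the BSC side expects 1200). Surrounding lines
! are illustrative, not copied from docker-playground.git.
bts 0
 oml remote-ip 172.18.9.11
 ipa unit-id 1234 0
```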

#10 Updated by laforge about 1 year ago

On Tue, Aug 07, 2018 at 12:13:42PM +0000, fixeria [REDMINE] wrote:

I just upgraded to the latest source code, and tested two things (separately):

- IPA unit-id mismatch: 1234 vs 1200,
- OML remote-ip mismatch 172.18.9.11 vs 172.18.9.100.

I'm not sure what you mean by 'mismatch'? We are executing the tests from the docker-playground.git
repositories automatically by jenkins every night, and the config files contained in docker-playground.git
should definitely work.

#11 Updated by laforge 3 months ago

  • Status changed from Feedback to Rejected
