Bug #3176
osmocom debian packages are not install / upgrade tested
100%
Description
We've recently seen some packaging bugs related to library version handling / package naming in both the 'nightly' and the 'latest' feeds that could have been discovered months earlier by automatic testing.
This could e.g. be achieved by some dockerfiles which would do things like:
- try to install all packages of a given feed (nightly or latest) from scratch on the respective base distro (debian 8/9, ...)
- start with a docker image that has older versions of packages installed already (like nightly from a week ago, or the previous "latest" packages) and try to upgrade all packages
Of course there's no need for Docker, I just think it could come in handy for related tests
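The install-from-scratch case could be sketched as a dockerfile along these lines. Note this is only an illustration: the feed URL, key handling, and the idea of reading package names out of the downloaded apt index under /var/lib/apt/lists/ are all assumptions, not the actual Osmocom setup.

```dockerfile
# Hypothetical install-from-scratch test; the feed URL is an assumption.
FROM debian:9

# Add the (assumed) Osmocom nightly feed for Debian 9.
RUN apt-get update && apt-get install -y wget gnupg ca-certificates && \
    echo "deb https://downloads.osmocom.org/packages/osmocom:/nightly/Debian_9.0/ ./" \
        > /etc/apt/sources.list.d/osmocom-nightly.list && \
    wget -qO - https://downloads.osmocom.org/packages/osmocom:/nightly/Debian_9.0/Release.key \
        | apt-key add -

# Try to install every package the feed provides. A dependency or file
# conflict makes "docker build" fail, which is exactly the test result
# we want. Parsing the feed's Packages index out of /var/lib/apt/lists/
# is an assumption about where apt leaves it.
RUN apt-get update && \
    apt-get install -y \
        $(awk '/^Package:/ {print $2}' /var/lib/apt/lists/*osmocom*Packages)
```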
Subtasks
Related issues
History
#2 Updated by laforge 10 months ago
- Related to Feature #1649: automatic nightly build of dpkgs for Debian GNU/Linux added
#5 Updated by neels 10 months ago
What confuses me about this is that we do use docker images that install nightly packages for the docker-playground ttcn3 tests.
On my machine, I rebuild those images many times a week, and I have so far not seen a single failure to install nightly packages via apt.
What am I doing "wrong" to be missing the packaging failures that users reported?
Aren't our ttcn3 jobs rebuilding the "osmo-foo-master" docker images?
They should see the nightly package updates and rebuild every day, right?
And if they fail, hopefully the jenkins jobs result in failure?
#6 Updated by zecke 10 months ago
I assume docker doesn't do upgrades but installs the packages from a clean debian state? The easiest way to reproduce would be to put old packages into a docker container and then always upgrade from this old version to a new one (and generate new versioned containers every n weeks... to have a kind of rolling upgrade test...)
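The rolling-upgrade idea could be sketched roughly as follows; the frozen base image name/tag is hypothetical, and the dpkg status check is just one possible way to detect a broken upgrade:

```dockerfile
# Hypothetical upgrade test: start from a snapshot image that already
# has last month's packages installed (tag name is an assumption).
FROM osmocom-nightly-snapshot:2018-09-01

# Upgrade everything; then fail the build if any package was left in a
# non-"ii" state (e.g. half-configured), which "dpkg -l" would show.
RUN apt-get update && \
    apt-get -y dist-upgrade && \
    ! dpkg -l | grep -q '^i[^i]'
```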
But doing "full" upgrade tests is complex: one would need to upgrade from any combination of base packages to any combination of upgrades. That feels like 100 or more testcases (sure, it can be automated, but complexity in a test is not nice. E.g. ttcn3 is appealing because it is real programming, but that comes at the cost of needing a test for your test). What might be better and easier is to enforce certain packaging standards.
A libfoo.so.3.1.0 should always be in a package called libfoo3? A binary should be in a NAME package.
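The naming convention zecke suggests (libfoo.so.3.1.0 lives in a package called libfoo3) could be checked mechanically. A minimal sketch, using only the filename; the real Debian convention keys off the library's SONAME, so treat this as a simplification:

```shell
#!/bin/sh
# Derive the expected Debian package name for a shared library file,
# following the convention above: libfoo.so.3.1.0 -> libfoo3.
expected_pkg_name() {
    lib=$(basename "$1")          # e.g. libfoo.so.3.1.0
    name=${lib%%.so.*}            # libfoo
    ver=${lib#*.so.}              # 3.1.0
    major=${ver%%.*}              # 3
    echo "${name}${major}"
}

expected_pkg_name /usr/lib/libfoo.so.3.1.0   # prints: libfoo3
```

Comparing this derived name against the package that actually ships the file (e.g. via dpkg -S) would then flag naming violations.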
#7 Updated by laforge 10 months ago
On Tue, Apr 17, 2018 at 12:34:42PM +0000, neels [REDMINE] wrote:
What confuses me about this is that we do use docker images that install nightly packages for the docker-playground ttcn3 tests.
good point.
On my machine, I rebuild those images many times a week, and I have so far not seen a single failure to install nightly packages via apt.
same here. I've never seen any problem whatsoever.
What am I doing "wrong" to be missing the packaging failures that users reported?
maybe:
- we're typically only installing a single program + its library dependencies, i.e. not testing to install all programs at once
- we're not installing osmo-nitb, as we don't do ttcn3-testing for it
- we don't test "latest"
Aren't our ttcn3 jobs rebuilding the "osmo-foo-master" docker images?
They should see the nightly package updates and rebuild every day, right?
And if they fail, hopefully the jenkins jobs result in failure?
yes.
#11 Updated by osmith 5 months ago
- % Done changed from 0 to 50
Getting a list of all packages in the repository and installing them in a plain debian:stretch container is working:
https://gerrit.osmocom.org/#/c/docker-playground/+/10862/
However, some packages are conflicting with others. So further development is blocked until that is resolved. I will open new issues for the conflicts.
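The "list of all packages in the repository" step amounts to parsing the feed's apt Packages index, which uses the standard Debian control field layout. A self-contained sketch with an inline sample instead of a real download:

```shell
#!/bin/sh
# Extract all package names from a Debian "Packages" index. In the real
# test this file would come from the Osmocom feed after "apt-get update"
# (an assumption); here an inline sample keeps the sketch runnable.
cat > /tmp/Packages.sample <<'EOF'
Package: libosmocore
Version: 0.12.0
Architecture: amd64

Package: osmo-msc
Version: 1.2.0
Architecture: amd64
EOF

# The "Package:" field starts each stanza; print its value.
awk '/^Package:/ {print $2}' /tmp/Packages.sample
```

Feeding that list to apt-get install inside a plain debian:stretch container then surfaces exactly the kind of conflicts described above.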
#12 Updated by laforge 5 months ago
On Mon, Sep 10, 2018 at 12:57:37PM +0000, osmith [REDMINE] wrote:
However, some packages are conflicting with others. So further development is blocked until that is resolved. I will open new issues for the conflicts.
Looking forward to that.
TODO:
minor stylistic note: It makes sense to add these as checkbox items to this ticket, rather than in the body/description.
#14 Updated by osmith 5 months ago
- Checklist item get conflicts in the binary repo resolved added
- Checklist item run --version on all osmocom binaries in the script added
- Checklist item code review added
- Checklist item merge to master added
- Checklist item create Jenkins CI job that runs the script (once for latest, once for nightly) added
It makes sense to add these as checkbox items to this ticket, rather than in the body/description.
Thanks for the hint, done!
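The "run --version on all osmocom binaries" checklist item could look like the helper below, which fails when a binary reports UNKNOWN instead of a real version string (the symptom later tracked in #3555). The binary names are illustrative; /bin/echo stands in for an Osmocom binary so the sketch is self-contained:

```shell
#!/bin/sh
# Run a binary with its arguments and inspect the output: reject any
# result containing UNKNOWN, accept everything else.
check_version() {
    out=$("$@" 2>&1)
    case "$out" in
        *UNKNOWN*) echo "FAIL: $1 reports UNKNOWN"; return 1 ;;
        *)         echo "OK:   $1 -> $out";         return 0 ;;
    esac
}

# In the real test this would loop over the installed osmo-* binaries
# and call e.g.: check_version osmo-msc --version
check_version echo "osmo-msc version 1.2.0"
check_version echo "UNKNOWN" || true
```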
#15 Updated by neels 5 months ago
About the conflicts between openbsc.git and the newer gits, if you can't find the origin of these packages automatically:
It would be ok to keep a manual list of packages to ignore. We aren't really interested in testing the openbsc.git derived packages.
So you can still discover packages automagically, and skip all those we know to be derived from openbsc.git.
(Once that works, we can still consider checking the old packages in a separate step.
But we don't want to spend time on openbsc.git anymore, and those aren't likely to change anyway,
so that's very very optional.)
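The manual ignore list neels suggests could be sketched as filtering the automatically discovered package names against a blacklist file. The package names here are examples only, not the actual openbsc.git-derived list:

```shell
#!/bin/sh
# Skip packages known to be derived from openbsc.git (names below are
# illustrative, not the real ignore list).
cat > /tmp/ignore.txt <<'EOF'
osmocom-nitb
osmocom-bsc
EOF

# -x: whole-line match, -F: fixed strings, -f: patterns from a file,
# -v: keep only packages NOT on the ignore list.
printf '%s\n' osmo-msc osmocom-nitb osmo-bsc osmocom-bsc \
    | grep -v -x -F -f /tmp/ignore.txt
```

The whole-line match matters: without -x, "osmocom-bsc" on the ignore list would not be distinguishable from other names it happens to be a substring of.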
#20 Updated by osmith 5 months ago
- Related to Feature #3555: debian-repo-install-test: check if binaries report UNKNOWN as version string added
#31 Updated by laforge 5 months ago
On Thu, Sep 27, 2018 at 01:24:09PM +0000, osmith [REDMINE] wrote:
I had changed your permissions, can you please try again?
It is not working.
not for this particular ticket, which is quite special as it has subtasks and
lots of special rules apply. In fact, not even I with admin privileges can
change that ticket status.
One thing that puzzled me is that all of the subtasks were closed at 0% completion, which is very odd. I changed it to 100% on all subtasks, but without any success.
#32 Updated by laforge 5 months ago
osmith wrote:
I had changed your permissions, can you please try again?
It is not working.
The problem is that the ticket is "Blocked by #3542". So you first need to close that one, and then you can close this.
Initially, your permissions were wrong so you couldn't close #3542 or this one. I guess after I changed permissions you only tried this ticket without checking the one blocking this issue?
#34 Updated by osmith 5 months ago
- Status changed from In Progress to Resolved
Initially, your permissions were wrong so you couldn't close #3542 or this one. I guess after I changed permissions you only tried this ticket without checking the one blocking this issue?
That is true, I did not try the other one. However, I tried it just now, and I still can't change anything in #3542, I can only leave a comment there.
But I did delete the "blocked by" relation, and now it allowed me to set the status to "Resolved".
#35 Updated by laforge 5 months ago
On Mon, Oct 01, 2018 at 09:25:13AM +0000, osmith [REDMINE] wrote:
That is true, I did not try the other one. However, I tried it just now, and I still can't change anything in #3542, I can only leave a comment there.
This is due to the fact that #3542 is a different osmocom sub-project.
OsmoSDR and/or its child projects rtl-sdr, gr-osmosdr, osmo-fl2k, gr-gsm
are all projects in which sysmocom is not involved, and where sysmocom
developers do not automatically get 'developer' privileges.
https://osmocom.org/projects/sdr will show you that the sysmocom
developers / employees are only listed in the 'Reporters' category.
Should sysmocom get involved in related projects, we could change that,
but I think in general it would be awkward for those projects and their
maintainers if employee status at sysmocom would automatically introduce
some role there.
I think the key problem is that this particular issue #3542 was filed in
the "Osmocom SDR" project. Sure, it affects SDR hardware or drivers,
but not the Osmocom SDR software projects, which I've listed above.
Sorry for the confusion :).
I'm moving it now.
But I did delete the "blocked by" relation, and now it allowed me to set the status to "Resolved".
great.