Cisco IOS claims "Far End Block Errors Detected"
If I connect the ice40 E1 interface via a crossover cable to a Cisco E1 interface card (I tried several different VWIC and VWIC2 models), it always reports "Far End Block Errors Detected":
c2811-itp>show controllers e1 0/3/0
E1 0/3/0 is up.
  Applique type is Channelized E1 - balanced
  Far End Block Errors Detected
  No alarms detected.
  alarm-trigger is not set
  Version info Firmware: 20060711, FPGA: 13, spm_count = 0
Framing is CRC4, Line Code is HDB3, Clock Source is Line.
CRC Threshold is 320. Reported from firmware is 320.
As can be seen, CRC4 framing is enabled and HDB3 line code is set. I believe this error condition prevents the Cisco IOS device from ever sending any real traffic on the line. I can configure as many timeslots with MTP2, HDLC or FR as I like, and I never see even a single flag octet on the remote (ice40) end.
When connecting a DAHDI card to the unmodified Cisco device, I get:
c2811-itp>show controllers e1 0/3/0
E1 0/3/0 is up.
  Applique type is Channelized E1 - balanced
  No alarms detected.
  alarm-trigger is not set
  Version info Firmware: 20060711, FPGA: 13, spm_count = 0
Framing is CRC4, Line Code is HDB3, Clock Source is Line.
CRC Threshold is 320. Reported from firmware is 320.
I did some research on the web, and it seems that this error message is the result of the remote end (ice40) reporting CRC errors in the data it received from the Cisco side. So my guess is that we might be setting some bits wrong in the TS0 bits.
As TS0 generation is in the FPGA part, I'm at a loss here trying to understand what exactly is going on :/
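For reference, the role of bit 1 of TS0 across one CRC-4 multiframe can be sketched as follows. This is plain ITU-T G.704 structure, not code from the gateware; the helper name is hypothetical:

```c
#include <stdbool.h>

/* Role of bit 1 of TS0 for each of the 16 frames of a CRC-4 multiframe
 * (per ITU-T G.704): even frames carry the CRC-4 bits C1..C4, odd frames
 * 1..11 carry the 001011 multiframe alignment signal, and odd frames 13
 * and 15 carry the error indication bits E1 and E2. */
enum ts0_bit1_role {
	TS0_BIT1_CRC,	/* C1..C4 of the CRC-4 remainder */
	TS0_BIT1_MFAS,	/* one bit of the 001011 alignment signal */
	TS0_BIT1_EBIT,	/* CRC-4 error indication bit (E1/E2) */
};

static enum ts0_bit1_role ts0_bit1_role(int frame)
{
	if ((frame & 1) == 0)
		return TS0_BIT1_CRC;
	return (frame == 13 || frame == 15) ? TS0_BIT1_EBIT : TS0_BIT1_MFAS;
}
```

So if the gateware mishandles anything in the odd frames, the far end would see it either as a multiframe alignment problem or, for frames 13/15, as bogus E-bit reports.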
I downgraded osmo-e1d to 785476901c0d78960e7bcd82fcc2c5535fadd70c (July 2019, i.e. long before any of my changes) to make sure it is not one of my recent osmo-e1d contributions. The behavior is identical, i.e. the "Far End Block Errors Detected" message appears there, too.
It might be related to the E-bits?
ITU-T G.704, in its clause on the CRC-4 multiframe, states:
In those frames not containing the frame alignment signal (see 2.3.2), bit 1 is used to transmit the 6-bit CRC-4 multiframe alignment signal and two CRC-4 error indication bits (E).
The E-bits should be set to "0" until both basic frame and CRC-4 multiframe alignment are established (see clause 4/G.706). Thereafter, the E-bits should be used to indicate received errored sub-multiframes by setting the binary state of one E-bit from 1 to 0 for each errored sub-multiframe.
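The two rules quoted above can be pinned down in a tiny helper (hypothetical code, just to make the polarity explicit; this is not the gateware logic):

```c
#include <stdbool.h>

/* E-bit transmit rule per ITU-T G.704: until both basic frame and CRC-4
 * multiframe alignment are established, send E=0.  Once aligned, send E=1
 * for a correctly received sub-multiframe and E=0 for an errored one.
 * Note the polarity: E=1 means "no error to report". */
static bool tx_ebit(bool aligned, bool smf_errored)
{
	if (!aligned)
		return false;		/* E=0 while not aligned */
	return !smf_errored;		/* E=1 good SMF, E=0 errored SMF */
}
```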
So yeah, the gateware is definitely at fault here:
* I don't set the bit properly when not aligned
* I inverted them ... I naively thought E=0 meant no errors :/
There is a way to work around this in the firmware. You can disable the 'automatic mode' of setting the E-bits by replacing E1_TX_CR_MODE_TS0_CRC in the init code for the TX core. It will then use bits [14:13] of the buffer descriptor word to fill the E-bits instead, and you can set those to 1 by using something like

  e1_regs->tx.bd = e1f_ofs_to_mf(ofs) | E1_BD_CRC1 | E1_BD_CRC0;

when filling the TX buffer descriptor.
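A minimal sketch of what that workaround amounts to, with the BD bit positions reconstructed from the "bits [14:13]" remark above (the macro values and the helper are assumptions, not taken from the actual firmware headers):

```c
#include <stdint.h>

/* Assumed BD word layout: bits [14:13] carry the E-bits when the TX core
 * runs in manual TS0 mode instead of E1_TX_CR_MODE_TS0_CRC. */
#define E1_BD_CRC1	(1 << 14)
#define E1_BD_CRC0	(1 << 13)

/* Build a TX buffer-descriptor word that forces both E-bits to 1
 * ("no error reported"); mf_ofs stands in for e1f_ofs_to_mf(ofs). */
static uint32_t tx_bd_with_ebits(uint32_t mf_ofs)
{
	return mf_ofs | E1_BD_CRC1 | E1_BD_CRC0;
}
```

With both E-bits pinned to 1, the Cisco side should stop seeing far-end block error reports even though the gateware's automatic E-bit handling is still broken.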
It'd be good to confirm that this workaround works and then I'll get to actually fixing the gateware to make sure the automatic mode works as the spec requires.