TWR nranges TDMA example cannot support more than 7 nodes

Dear Decawave community,

We are experiencing a network capacity issue when using more than 7 nodes in the TWR nranges TDMA example application.

We own an MDEK1001 Development Kit with 12 DWM1001 development units. In our setup, we are using 1 unit as a tag, 1 unit as a master node, and the remaining 10 units as slave nodes.

We are on the latest version of the master branch, and all 12 units have been flashed with freshly built firmware. We created and built the targets following the instructions in the README file.
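For reference, the workspace was set up along these lines (paraphrased from the README, so treat the exact steps as approximate):

git clone https://github.com/Decawave/mynewt-dw1000-apps.git
cd mynewt-dw1000-apps
newt install -v   # pulls mynewt-core, mynewt-dw1000-core and the other repos listed in project.yml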

Master node configuration:

newt target create master_node
newt target set master_node app=apps/twr_nranges_tdma
newt target set master_node bsp=@mynewt-dw1000-core/hw/bsp/dwm1001
newt target amend master_node syscfg=PANMASTER_ISSUER=1
newt run master_node 0.1.0

Slave nodes configuration:

newt target create slave_node
newt target set slave_node app=apps/twr_nranges_tdma
newt target set slave_node bsp=@mynewt-dw1000-core/hw/bsp/dwm1001
newt target amend slave_node syscfg=NRANGES_ANCHOR=1
newt run slave_node 0.1.0

Tag configuration:

newt target create tag
newt target set tag app=apps/twr_nranges_tdma
newt target set tag bsp=@mynewt-dw1000-core/hw/bsp/dwm1001
newt target set tag build_profile=debug
newt target amend tag syscfg=NRNG_NNODES=16:NRNG_NFRAMES=32:NODE_START_SLOT_ID=0:NODE_END_SLOT_ID=15
newt run tag 0.1.0
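For anyone reproducing this, the amended values can be verified with newt's built-in inspection command before flashing, e.g. for the tag:

newt target show tag
# expect app, bsp, build_profile and a syscfg line echoing back the four values above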

Note that in the above configuration we are telling the tag to make room for 16 nodes (NRNG_NNODES=16), while in reality we only have 11 nodes (1 master and 10 slaves), so there should be more than enough room for every node in the network. Likewise, we size the number of frames following the NRNG_NFRAMES <= NRNG_NNODES * 2 rule (NRNG_NFRAMES = 16 * 2 = 32), and we set the start/end slot IDs to match the number of nodes (NODE_START_SLOT_ID=0 and NODE_END_SLOT_ID=15).
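Spelled out, the sizing we applied is just the following arithmetic (an illustrative sketch only; the syscfg names are the same ones used above):

# same sizing, parameterized over the desired node count N:
N=16
newt target amend tag "syscfg=NRNG_NNODES=$N:NRNG_NFRAMES=$((N * 2)):NODE_START_SLOT_ID=0:NODE_END_SLOT_ID=$((N - 1))"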

When we connect the first 7 nodes, the panm list command reports the following network state:

082552 #idx, addr, role, slot, p,  lease, euid,             flags,          date-added, fw-ver
082552    0, 5526,    1,    0,  ,       , 013A6102C4955526,  1000, 1970-01-01T00:00:00, 0.1.0
082552    1, 8805,    1,    1,  , 3512.2, 013A6102C4408805,  1000, 1970-01-01T00:03:07, 0.1.0
082552    2, c92c,    2,    0,  , 3514.3, 013A6102C3F4C92C,  2000, 1970-01-01T00:00:05, 0.1.0
082554    3, 5213,    1,    2,  , 3511.1, 013A6102C4405213,  1000, 1970-01-01T00:01:06, 0.1.0
082555    4,  e89,    1,    3,  , 3515.4, 013A6102C4350E89,  1000, 1970-01-01T00:01:45, 0.1.0
082556    5, 973b,    1,    4,  , 3513.2, 013A6102C4B4973B,  1000, 1970-01-01T00:02:26, 0.1.0
082557    6, 4d27,    1,    5,  , 3583.1, 013A6102C3F44D27,  1000, 1970-01-01T00:04:02, 0.1.0
082558    7, d88a,    1,    6,  , 3597.1, 013A6102C440D88A,  1000, 1970-01-01T00:05:28, 0.1.0

which looks good to us.

When we connect the 8th node (slave c72a), panm list momentarily adds it to the node list:

087188 #idx, addr, role, slot, p,  lease, euid,             flags,          date-added, fw-ver
087188    0, 5526,    1,    0,  ,       , 013A6102C4955526,  1000, 1970-01-01T00:00:00, 0.1.0
087189    1, 8805,    1,    1,  , 3475.9, 013A6102C4408805,  1000, 1970-01-01T00:03:07, 0.1.0
087189    2, c92c,    2,    0,  , 3478.1, 013A6102C3F4C92C,  2000, 1970-01-01T00:00:05, 0.1.0
087190    3, 5213,    1,    2,  , 3474.8, 013A6102C4405213,  1000, 1970-01-01T00:01:06, 0.1.0
087191    4,  e89,    1,    3,  , 3479.1, 013A6102C4350E89,  1000, 1970-01-01T00:01:45, 0.1.0
087193    5, 973b,    1,    4,  , 3477.0, 013A6102C4B4973B,  1000, 1970-01-01T00:02:26, 0.1.0
087194    6, 4d27,    1,    5,  , 3546.9, 013A6102C3F44D27,  1000, 1970-01-01T00:04:02, 0.1.0
087195    7, d88a,    1,    6,  , 3560.9, 013A6102C440D88A,  1000, 1970-01-01T00:05:28, 0.1.0
087196    8, c72a,    1,    7,  , 3588.8, 013A6102C4B4C72A,  1000, 1970-01-01T00:11:10, 0.1.0

But after a few seconds the network suddenly loses synchronization, Clock Calibration Packet (CCP) messages appear on the UART console, and we end up with the following degraded network:

006441 #idx, addr, role, slot, p,  lease, euid,             flags,          date-added, fw-ver
006441    0,  e89,    1,     ,  ,       , 013A6102C4350E89,  1000, 1970-01-01T00:01:45, 0.1.0
006441    1, 4d27,    1,     ,  ,       , 013A6102C3F44D27,  1000, 1970-01-01T00:04:02, 0.1.0
006441    2, 5213,    1,     ,  ,       , 013A6102C4405213,  1000, 1970-01-01T00:01:06, 0.1.0
006443    3, 5526,    1,    0,  ,       , 013A6102C4955526,  1000, 1970-01-01T00:00:00, 0.1.0
006444    4, 8805,    1,     ,  ,       , 013A6102C4408805,  1000, 1970-01-01T00:03:07, 0.1.0
006445    5, 973b,    1,     ,  ,       , 013A6102C4B4973B,  1000, 1970-01-01T00:02:26, 0.1.0
006446    6, c72a,    1,    1,  , 3555.0, 013A6102C4B4C72A,  1000, 1970-01-01T00:11:10, 0.1.0
006448    7, c92c,    2,    0,  , 3555.0, 013A6102C3F4C92C,  2000, 1970-01-01T00:00:05, 0.1.0
006449    8, d88a,    1,     ,  ,       , 013A6102C440D88A,  1000, 1970-01-01T00:05:28, 0.1.0

As you can see, we are left with only 3 operational units holding an assigned slot ID: the tag, the master node, and a single slave node. The other 6 slave nodes remain idle with their LEDs blinking red.

We have observed this exact behavior consistently across every test we have run so far, so it is clearly reproducible rather than a one-off glitch.

We would like to understand why we cannot connect more than 7 nodes with a configuration that should theoretically accommodate up to 16.

Is our configuration wrong? Or is there a physical capacity limit that we are not aware of?
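One detail that may or may not be relevant: the failure starts exactly when slot 7, i.e. the 8th ranging slot, is handed out. That makes us suspect a default slot count of 8 somewhere in the node-side build, since we only amended syscfg on the tag. Assuming the relevant knob is the tdma package's superframe slot count (we are guessing at the syscfg name here; please correct us if it is wrong), would amending the node targets along these lines be the proper fix?

# TDMA_NSLOTS is our guess at the syscfg name for the superframe slot count
newt target amend master_node syscfg=TDMA_NSLOTS=16
newt target amend slave_node syscfg=TDMA_NSLOTS=16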

Thank you.