How to make a node transmit in its assigned time slot

Time slots are mentioned in the DW1000 user manual: individual slave nodes can synchronize with the beacon and transmit only in their assigned slots, but I don't know exactly how to control a node's transmission so that it happens in its assigned slot.

Hi huijunn,

Can you clarify which hardware and software you are currently using?

“Slave nodes”, “beacon” and “slots” are purely software concepts that can be used to organise communication between multiple DW1000s. They may be mentioned in the DW1000 manual but will not be extensively described, as they are not directly related to the product itself.

Thanks
Yves

I did slot-division research based on the development board. I know how to estimate the frequency offset and obtain a precise transmission time, but I'm using a CPU timer to control node transmission, and it's unstable.

Hi huijunn,

Did you manage to make any advances in this matter? I am also trying to use nRF’s app_timer to control node transmission but as you mentioned, it is not very stable and precise. What other approach have you tried?

Hi Xavier,

A better approach is to use the delayed transmission feature of the DW1000.
All timing will then be based on the DW1000's own clock.

E.g.

  1. Read time from the DW1000.
  2. Add an offset to obtain the time of the future slot.
  3. Use delayed transmission to send the frame.
  4. Repeat it as needed.

This will give you very accurate timing which can be synchronized between the nodes. This technique is used in PANS.
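The arithmetic behind steps 1–3 can be sketched as below. This is a minimal sketch: the actual register reads/writes (e.g. `dwt_readsystime()` / `dwt_setdelayedtrxtime()` in the Decawave C driver) are assumed and not shown; only the 40-bit time handling is illustrated.

```c
#include <stdint.h>

/* DW1000 system time is a 40-bit counter (~15.65 ps per unit) that
 * wraps roughly every 17.2 s, so additions must be taken modulo 2^40. */
#define DW_TIME_MASK ((1ULL << 40) - 1)

/* Step 2: add an offset to the current DW1000 time to get the slot
 * start, handling the 40-bit wrap-around. */
uint64_t slot_tx_time(uint64_t sys_time, uint64_t offset)
{
    return (sys_time + offset) & DW_TIME_MASK;
}

/* Step 3: the delayed-TX register is programmed with the upper 32 bits
 * of the 40-bit time (this is the value a call like
 * dwt_setdelayedtrxtime() expects in the Decawave driver). */
uint32_t delayed_trx_value(uint64_t tx_time)
{
    return (uint32_t)(tx_time >> 8);
}
```

Repeating this each slot (step 4) keeps all nodes on the DW1000 timebase rather than on their individual CPU timers.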

Cheers,
TDK

Hi leapslabs,

Thank you for sharing that! I will explore it.

To further clarify, will processing TX/RX interrupts, reading the system time and calculating/setting the delayed TX/RX incur significant overhead? In my use case, I would like to keep the processing time minimal so as to minimize the clock drift that would make the calculated distances inaccurate.

Regards,
Xavier

The processing time is not significantly more than for an immediate transmission.

The fastest solution is to transmit a reply immediately; this gives the minimum time for things like clock drift to occur. The downside is that there will be variation in the receive-to-transmit time delays. It also doesn't allow you to assign time slots: since you can't have multiple replies all sent as soon as possible without getting collisions, it's only suitable for one-to-one communication.

If you use a delayed transmit you can schedule it and so obtain a very deterministic time delay. This removes the variation in reply time and allows you to assign time slots. Due to register resolution this delay won't always be exactly the same, but the error from ideal is known in advance, so the transmitter can include the error in the data packet. The downside is that your delay must always be set slightly larger than your worst-case processing delay, so your minimum time to send a reply is longer.
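The "known in advance" error above comes from the delayed-TX time register ignoring the low bits of the 40-bit time. A sketch of the calculation, assuming the 9 ignored low bits described in the DW1000 user manual (roughly 8 ns of resolution):

```c
#include <stdint.h>

/* The delayed-TX time register ignores the low 9 bits of the 40-bit
 * DW1000 time, i.e. the achieved start time is the desired time
 * rounded down to a multiple of 512 counter units (~8 ns). */
#define DW_DX_IGNORED_BITS 9

/* TX actually starts at the rounded-down time. */
uint64_t achieved_tx_time(uint64_t desired)
{
    return desired & ~((1ULL << DW_DX_IGNORED_BITS) - 1);
}

/* The shortfall (in ~15.65 ps counter units) is known before the frame
 * is sent, so it can be embedded in the outgoing packet for the
 * receiver to correct with. */
uint32_t tx_time_error(uint64_t desired)
{
    return (uint32_t)(desired & ((1ULL << DW_DX_IGNORED_BITS) - 1));
}
```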

Hi AndyA,

Thank you for the prompt reply! I noted your explanation.
For my use case, it is ranging between every possible pair of devices in a network of multiple devices, so the data packet carries additional information to make that happen. Understandably, this will cost more processing time.
My question is: is there a way for me to measure the processing time between two points? I am thinking of just reading the system time at both points, but I'm not sure whether that is a feasible or accurate way to do it.

Sorry for hijacking this thread, I can make a new thread if that is preferable.

Regards,
Xavier

If you are sending larger radio packets, then the time to read/write those packets over SPI to/from the Decawave chip and the radio transmit time end up being far larger than any other effects on your processing time. Which is nice, because you can calculate exactly what they are for any given packet.
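A rough model of those two terms might look like the following. The figures are assumptions for illustration (an 8 MHz SPI clock, the 6.8 Mbps data rate, and about 1 µs per preamble symbol), not datasheet values; it also ignores PHY header and framing overhead, so treat it as order-of-magnitude only.

```c
/* Time to shift a packet of `bytes` over SPI at `spi_hz`, in µs. */
double spi_transfer_us(unsigned bytes, double spi_hz)
{
    return bytes * 8.0 * 1e6 / spi_hz;
}

/* Very rough on-air time: preamble symbols at ~1 us each (assumed),
 * plus the data portion at the configured data rate. */
double airtime_us(unsigned payload_bytes, unsigned preamble_syms,
                  double data_rate_bps)
{
    double preamble_us = preamble_syms * 1.0;
    double data_us = payload_bytes * 8.0 * 1e6 / data_rate_bps;
    return preamble_us + data_us;
}
```

For a maximum-size 127-byte frame, both terms are in the hundred-microsecond range, which dwarfs typical interrupt-handling overhead.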

Personally, my preferred way to measure processing time is to toggle an LED or some other insignificant IO line and then measure the time on an oscilloscope. Since something like that should only take a couple of processor cycles and doesn't require any memory, it has virtually no impact on the speed of the process you are trying to time. If that isn't practical, then the internal CPU time is probably good enough as a performance measure. The Decawave timestamps would be more accurate, but reading them over SPI is slow and will have far more impact on performance, so it's not worth it unless you need ns-accurate times.

Noted. This has been very informative! Thanks for your time!