Another step towards a working system for the D-JPET scanner. We have 48 front-end boards that digitize the signals and measure time, read out by 4 concentrator boards. But how do we synchronize them over the single fiber connection we have for data transport and control/monitoring?
We have based our data transport infrastructure on default AXI components and Aurora links. One can share a single link between AXI Stream and Memory-Mapped applications, which is perfect for our project. This allows us to develop a system incredibly fast, basically using the block design in Vivado.
But, using the block design, you often get what they give you. The automatically generated Aurora links have a fixed clocking scheme, with no way to change it through the wizards. One can still take the generated sources and create a custom IP. Then it is easy to change the clocking scheme and synchronize to the clock recovered from the input data stream. And when you realize that there are a few spare bits available in the AXI Stream data bus, you can use them to transport additional information, such as synchronization pulses.
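As a rough software illustration (not HDL) of the last idea: if the link word is wider than the payload, a spare bit above the payload can carry a synchronization-pulse flag in-band. The 64-bit payload width and the bit position are assumptions for the sketch, not the actual link layout.

```python
# Sketch: a sync-pulse flag packed into a spare bit of a link word.
# PAYLOAD_BITS and the flag position are illustrative assumptions.
PAYLOAD_BITS = 64                 # assumed payload width on the link
SYNC_BIT = 1 << PAYLOAD_BITS      # first spare bit above the payload

def encode_word(payload: int, sync_pulse: bool) -> int:
    """Pack a payload and an optional sync-pulse flag into one link word."""
    assert 0 <= payload < (1 << PAYLOAD_BITS)
    return payload | (SYNC_BIT if sync_pulse else 0)

def decode_word(word: int) -> tuple[int, bool]:
    """Recover the payload and the sync-pulse flag on the receiving side."""
    return word & ((1 << PAYLOAD_BITS) - 1), bool(word & SYNC_BIT)

word = encode_word(0xDEADBEEF, sync_pulse=True)
payload, sync = decode_word(word)
assert payload == 0xDEADBEEF and sync
```

Because the flag travels in the same word as the data, the receiver sees it aligned to the recovered link clock, which is what makes it usable for synchronization.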
Our processing rack is growing. Today we installed Virtex UltraScale based data concentrators in the rack and connected them to the DAQ server. Integrated, real-time processing and a screen will give an instant overview of the measurement.
Do not be fooled by the photo: there are wheels under the rack.
This Monday we had a really successful workshop on video processing on Zybo Z7 boards.
The workshop was prepared by the experts from Kamami.pl and Digilent. 20 participants learned video processing techniques, Vivado, and HLS.
We look forward to more such workshops in the future!
On Monday 20th November 2017, we’ll be hosting a free-of-charge workshop on video processing on Zynq devices, organized by KAMAMI and Digilent.
Head over to [this] link and register!
Enhanced image reconstruction, including 3D and TOF functionalities, has been successfully implemented entirely in the FPGA!
In programmable logic, we find LOR candidates and reconstruct the annihilation point coordinates. Then only the X, Y, Z values are sent from the JPET Controller to the server, which produces a 3D canvas with the scanner visualization.
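The geometry behind the TOF reconstruction can be sketched in a few lines of software: the annihilation point lies on the LOR, shifted from its midpoint by c·(t1 − t2)/2 toward the detector that was hit earlier. The hit coordinates, units, and timing convention below are illustrative; the in-FPGA implementation naturally works on fixed-point values.

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def annihilation_point(hit1, hit2, t1_ns, t2_ns):
    """Shift the LOR midpoint by c*(t1 - t2)/2 along the LOR.

    A positive offset moves the point toward hit2, i.e. toward the
    detector whose photon arrived earlier when t1 > t2.
    """
    dx = [b - a for a, b in zip(hit1, hit2)]
    length = math.sqrt(sum(d * d for d in dx))
    u = [d / length for d in dx]             # unit vector hit1 -> hit2
    s = 0.5 * C_MM_PER_NS * (t1_ns - t2_ns)  # signed offset from midpoint
    return tuple(0.5 * (a + b) + s * ui for a, b, ui in zip(hit1, hit2, u))

# Equal arrival times put the point at the LOR midpoint:
p = annihilation_point((-400.0, 0.0, 0.0), (400.0, 0.0, 0.0), 2.0, 2.0)
# p == (0.0, 0.0, 0.0)
```

Sending only these three coordinates per event, rather than raw hit data, is what keeps the link to the server so lightweight.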
You can find a video showing it in action under [this] link.