Things are getting serious!
Together with the official announcement of the J-PET Lab opening [link], we are shifting up a gear: there is plenty of FPGA-related development, both in low-level RTL and HLS. If you are interested, have a look at the diploma projects tab or email us directly at email@example.com
[Visualization by firstname.lastname@example.org]
Another step towards a working system for the D-JPET scanner. We have 48 front-end boards that digitize the signals and measure time, read out by 4 concentrator boards. But how do we synchronize them using the single fiber connection that we have for data transport and control/monitoring?
We have based our data transport infrastructure on standard AXI components and Aurora links. A single link can be shared between AXI Stream and Memory Mapped applications, which is perfect for our project. This allows us to develop a system incredibly fast, basically using the block design in Vivado.
But when using the block design, you often get only what the tools give you. The automatically generated Aurora links have a fixed clocking scheme, with no way to change it through the wizards. One can still take the generated sources and create a custom IP. Then it is easy to change the clocking scheme and synchronize to the clock recovered from the input data stream. And once you realize that there are a few bits available in the AXI Stream data bus, you can use them to transport additional information, such as synchronization pulses.
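The last step can be illustrated in a few lines. This is a minimal sketch, not our RTL: it assumes a 32-bit payload and one spare bus bit used as a sync flag, which is not necessarily the exact bus layout of our design.

```python
# Sketch: carrying a synchronization pulse in a spare AXI-Stream bus bit.
# The 32-bit payload width and the flag position are assumptions for this
# example, not the actual field layout of our links.

PAYLOAD_BITS = 32
PAYLOAD_MASK = (1 << PAYLOAD_BITS) - 1
SYNC_BIT = 1 << PAYLOAD_BITS  # one otherwise-unused bus bit

def pack(word: int, sync: bool) -> int:
    """Merge a payload word and a sync pulse into a single bus word."""
    assert 0 <= word <= PAYLOAD_MASK
    return word | (SYNC_BIT if sync else 0)

def unpack(bus_word: int) -> tuple[int, bool]:
    """Recover the payload and the sync pulse on the receiving side."""
    return bus_word & PAYLOAD_MASK, bool(bus_word & SYNC_BIT)
```

Because the pulse travels in-band with the data, it costs no extra link bandwidth and stays aligned with the word it accompanies.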
Our J-PET scanner has been one of the three main projects for whole-body PET imaging presented during the Total Body PET – From Mice to Men conference.
The contribution of our FPGA-FAIS group took the form of a poster highlighting the main ideas for innovative tomographic data processing. As no other system performs high-level analysis of this sort at the FPGA level, we attracted a lot of interest.
You can check out the poster under this link.
It is now possible to get a real-time view of our scanner visualization in your browser (if you are inside the department's local network).
The most interesting thing is how the image gets there:
- Data from the digitizers is received by the J-PET Controller FPGA
- Lines-of-Response (LORs) are reconstructed in real time
- Regions-of-Response are converted into X, Y, Z annihilation coordinates
- The stream of reconstructed points is forwarded from the programmable logic through Xillybus to the shared DDR memory
- PetaLinux with Ubuntu, running on the integrated ARM cores, accesses the memory and streams the points to your browser via a Node.js server
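The geometric core of the pipeline above can be sketched in a few lines of Python. This is a simplified illustration using the textbook time-of-flight relation, not our FPGA firmware; the detector coordinates and timing values are made up for the example.

```python
# Sketch: placing an annihilation point on a Line-of-Response (LOR) from
# two detector hits, using their time-of-flight difference. Pure-Python
# illustration only; the real processing runs in programmable logic.

C = 299_792_458.0  # speed of light, m/s

def annihilation_point(p1, p2, t1, t2):
    """Locate the annihilation on the LOR between hit positions p1 and p2
    (meters) from their arrival times t1 and t2 (seconds)."""
    # Midpoint of the LOR
    mid = [(a + b) / 2 for a, b in zip(p1, p2)]
    # Unit vector along the LOR, pointing from p1 to p2
    d = [b - a for a, b in zip(p1, p2)]
    length = sum(x * x for x in d) ** 0.5
    u = [x / length for x in d]
    # The earlier hit is closer to the annihilation: shift from the
    # midpoint by half the path difference c*(t1 - t2)/2 along the LOR
    shift = C * (t1 - t2) / 2
    return [m + shift * x for m, x in zip(mid, u)]

# Two simultaneous hits place the point at the LOR midpoint:
annihilation_point((0.0, -0.4, 0.0), (0.0, 0.4, 0.0), 0.0, 0.0)
# → [0.0, 0.0, 0.0]
```

Each reconstructed (X, Y, Z) point of this kind is what the pipeline streams through Xillybus to the DDR memory and on to the browser.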
It is the next step in integrating the image reconstruction process into the FPGA SoC device. Now we have a framework on which we can implement further data processing in software.
We are thankful to Bartłomiej Flak and Dr Paweł Rajda from AGH Kraków for their contribution to the J-PET project.
We are glad to announce that our manuscript “Evaluation of single-chip, real-time tomographic data processing on FPGA SoC devices” has been accepted for publication in IEEE Transactions on Medical Imaging!
We are the first to have developed a compact and integrated solution for high-level PET processing on FPGAs.
It is a significant confirmation that our work is appreciated by the community. We have much more to come in this area!
You can check it out here.
We are glad to announce that our idea of compact, integrated and, of course, FPGA-based data processing and image reconstruction has been appreciated by the organizers of the Total-Body PET 2018 Conference in Ghent, Belgium.
We will present the concepts and results of our latest developments.
Our processing rack is growing up. Today we installed the Virtex UltraScale-based data concentrators inside the rack and connected them to the DAQ server. Integrated real-time processing and a screen will give an instant overview of the measurement.
Do not be fooled by the photo, there are wheels under the rack.