PanoShip
Panoramic Video Generation and Object Tracking for Smart Ships via Deep Learning and HPC
This project aims to develop and optimise algorithms for multi-view video stitching and object tracking tailored to unmanned surface vehicles (USVs). The primary objectives are to enhance real-time panoramic video generation and robust object tracking in challenging maritime environments characterised by large parallax, low-resolution imagery, and sparse features. The motivation stems from the need for comprehensive situational awareness on USVs to ensure safe navigation and effective monitoring in complex sea conditions.

The project will leverage high-performance computing (HPC) resources to test and optimise deep learning-based algorithms, including unsupervised image stitching (based on UDIS++), video stitching with stabilisation (based on StabStitch), and multi-object tracking (based on YOLOv7 and ByteTrack). These algorithms require significant computational power for processing high-volume video data, training neural networks, and conducting scalability tests. The main computational tasks are optimisation of the unsupervised image stitching method, performance testing of the video stitching and stabilisation algorithms, and training of object detectors for the tracking pipeline.

Results will be disseminated through progress reports and peer-reviewed publications. The project team, led by Prof. Haitong Xu, has prior experience in maritime visual perception and has successfully used HPC platforms for similar tasks. The outcomes are expected to advance the autonomy of USVs and contribute to safer maritime operations.
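To illustrate the multi-object tracking component, the sketch below shows the core idea behind ByteTrack-style association: detections are split into high- and low-confidence sets, existing tracks are first matched to high-confidence boxes via Hungarian assignment on an IoU cost, and the still-unmatched tracks get a second chance against low-confidence boxes. This is a minimal, self-contained approximation for illustration only; the thresholds and the iou/associate helpers are assumptions, not the project's implementation, which builds on YOLOv7 detections and the full ByteTrack tracker with Kalman-filter motion prediction.

```python
# Minimal sketch of ByteTrack-style two-stage association (illustrative only:
# thresholds and helpers are simplified assumptions, not the project's code).
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(track_boxes, det_boxes, iou_thresh=0.3):
    """Hungarian matching on a (1 - IoU) cost matrix.

    Returns (matches, unmatched_track_indices, unmatched_det_indices)."""
    if not track_boxes or not det_boxes:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_t = [i for i in range(len(track_boxes)) if i not in matched_t]
    unmatched_d = [j for j in range(len(det_boxes)) if j not in matched_d]
    return matches, unmatched_t, unmatched_d


def bytetrack_step(track_boxes, det_boxes, det_scores, high=0.6, low=0.1):
    """One frame of two-stage association.

    Stage 1 matches existing tracks to high-confidence detections; stage 2
    matches the remaining tracks to low-confidence detections, which is the
    key idea that lets ByteTrack recover occluded or blurred targets."""
    high_idx = [i for i, s in enumerate(det_scores) if s >= high]
    low_idx = [i for i, s in enumerate(det_scores) if low <= s < high]

    m1, unmatched_tracks, _ = associate(track_boxes, [det_boxes[i] for i in high_idx])
    matches = [(t, high_idx[d]) for t, d in m1]

    m2, _, _ = associate([track_boxes[t] for t in unmatched_tracks],
                         [det_boxes[i] for i in low_idx])
    matches += [(unmatched_tracks[t], low_idx[d]) for t, d in m2]
    return matches  # list of (track_index, detection_index) pairs
```

In a real USV pipeline, track_boxes would come from motion-predicted track positions and det_boxes/det_scores from a detector such as YOLOv7 run on each frame of the stitched panoramic video.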