Deep Learning and Neural Networks for Multi-modal Human-Swarm Data Fusion
A human-swarm interaction generates large real-time data streams, including images, voice, EEG, human physiological data, task and swarm data, and interaction data. In our paper below, we took steps towards fusing two of these sources: images and time-series/signal sensory data. This project will systematically develop deep learning models and architectures for general multi-modal data fusion networks. The candidate will work closely with Prof. Hussein Abbass's team (www.husseinabbass.net) at UNSW-Canberra and will contribute deep learning models for real-time analysis of human-swarm interaction tasks.
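To illustrate the kind of fusion involved, the sketch below shows a minimal late-fusion pipeline in NumPy: one encoder pools an image, another pools a multi-channel physiological signal, and their embeddings are concatenated and passed through a shared classification head. All shapes, weight matrices, and function names here are illustrative assumptions, not the project's actual architecture; in practice the encoders would be a trained CNN and a temporal model (e.g. an RNN or temporal convolution).

```python
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(img, W):
    # Stand-in for a CNN: global average pool over spatial dims,
    # then a linear projection into a modality embedding.
    pooled = img.mean(axis=(0, 1))            # (channels,)
    return np.tanh(pooled @ W)                # (d_img,)

def signal_encoder(sig, W):
    # Stand-in for an RNN/temporal model: mean over time,
    # then a linear projection into a modality embedding.
    pooled = sig.mean(axis=0)                 # (sensors,)
    return np.tanh(pooled @ W)                # (d_sig,)

def fuse(img_feat, sig_feat, W_out):
    # Late fusion: concatenate the two modality embeddings,
    # then apply a shared head with a softmax over task classes.
    joint = np.concatenate([img_feat, sig_feat])
    logits = joint @ W_out
    return np.exp(logits) / np.exp(logits).sum()

# Toy inputs: a 32x32 RGB image and 100 timesteps of 8 sensor channels
# (shapes chosen purely for illustration).
img = rng.standard_normal((32, 32, 3))
sig = rng.standard_normal((100, 8))
W_img = rng.standard_normal((3, 16))
W_sig = rng.standard_normal((8, 16))
W_out = rng.standard_normal((32, 4))

probs = fuse(image_encoder(img, W_img), signal_encoder(sig, W_sig), W_out)
print(probs.shape, round(float(probs.sum()), 6))  # (4,) 1.0
```

The design point is that each modality gets its own encoder suited to its structure (spatial for images, temporal for signals), and fusion happens in a shared embedding space rather than on raw data.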