Publications
An Underwater Simulation Server Oriented to Cooperative Robotic Interventions: The Educational Approach
Published by IEEE
Alejandro Solis, Raul Marin, Javier Marina, Francisco J. Moreno, Mario Ávila, Marcos de la Cruz, Daniel Delgado, Jose V. Marti, Pedro J. Sanz
Jaume I University of Castellon - CIRTESU (Underwater Robotics and Technology Research Center)
Experiments that require the use of Supervised Autonomous Underwater Vehicles for Intervention (I-AUVs) are not easy to perform, especially when deployed at sea or in scenarios where the robot may face limited space and communication (e.g., the interior of pipes). There are also applications where the robots need to cooperate closely, for example when transporting and assembling large pipes. In fact, these two scenarios are being studied in the context of the H2020-ElPeacetolero and TWINBOT (TWIN roBOTs for cooperative underwater intervention mission) projects, making it necessary to have a simulation tool that offers more realistic rendering and is compatible with the real robot's Application Programming Interface (API).
Head and eye egocentric gesture recognition for human-robot interaction using eyewear cameras
Available on arXiv
Javier Marina-Miranda, V. Javier Traver
Jaume I University of Castellon - Institute of New Imaging Technologies
Non-verbal communication plays a particularly important role in a wide range of scenarios in Human-Robot Interaction (HRI). Accordingly, this work addresses the problem of human gesture recognition. In particular, we focus on head and eye gestures, and adopt an egocentric (first-person) perspective using eyewear cameras. We argue that this egocentric view offers a number of conceptual and technical benefits over scene- or robot-centric perspectives.
A motion-based recognition approach is proposed, which operates at two temporal granularities. Locally, frame-to-frame homographies are estimated with a convolutional neural network (CNN). The output of this CNN is then fed into a long short-term memory (LSTM) network to capture the longer-term temporal visual relationships that are relevant for characterizing gestures.
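The paper itself is not accompanied by code here; the following is a minimal PyTorch sketch of the two-granularity pipeline described above, under assumed details (grayscale frame pairs stacked as two channels, an 8-parameter homography output, and the hypothetical module names HomographyCNN and GestureLSTM). It illustrates the idea, not the authors' implementation.

```python
# Minimal sketch of the two-granularity pipeline (assumptions: PyTorch,
# grayscale frame pairs, 8-parameter homography, hypothetical shapes/names).
import torch
import torch.nn as nn

class HomographyCNN(nn.Module):
    """Regresses a frame-to-frame homography from a stacked pair of frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),  # 2 stacked frames
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 8)  # 8-DoF homography parameterization

    def forward(self, pair):
        feat = self.features(pair)     # internal representation (see note below)
        return self.head(feat), feat

class GestureLSTM(nn.Module):
    """Aggregates per-frame-pair motion descriptors over time to classify a gesture."""
    def __init__(self, in_dim, num_gestures):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 128, batch_first=True)
        self.cls = nn.Linear(128, num_gestures)

    def forward(self, seq):            # seq: (batch, time, in_dim)
        out, _ = self.lstm(seq)
        return self.cls(out[:, -1])    # classify from the last time step

# Per time step: run the CNN on consecutive frame pairs, stack, classify.
cnn, clf = HomographyCNN(), GestureLSTM(in_dim=8, num_gestures=5)
pairs = torch.randn(4, 10, 2, 64, 64)  # (batch, time, 2 frames, H, W)
h_seq = torch.stack([cnn(pairs[:, t])[0] for t in range(10)], dim=1)
logits = clf(h_seq)                    # (batch, num_gestures)
```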
Regarding the configuration of the network architecture, one particularly interesting finding is that using the output of an internal layer of the homography CNN increases the recognition rate with respect to using the homography matrix itself. While this work focuses on action recognition, and no robot or user study has been conducted yet, the system has been designed to meet real-time constraints. The encouraging results suggest that the proposed egocentric perspective is viable, and this proof-of-concept work provides novel and useful contributions to the exciting area of HRI.
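The internal-layer finding can be illustrated with the sketch above: instead of feeding the 8-parameter homography to the LSTM, one feeds the CNN's intermediate feature vector (the hypothetical 64-dimensional `feat` in the sketch). The specific layer and dimensionality are assumptions for illustration, not values reported in the paper.

```python
# Variant reflecting the reported finding: use the CNN's internal feature
# vector (64-d in this sketch) as LSTM input instead of the homography itself.
clf_feat = GestureLSTM(in_dim=64, num_gestures=5)
feat_seq = torch.stack([cnn(pairs[:, t])[1] for t in range(10)], dim=1)  # (batch, time, 64)
logits = clf_feat(feat_seq)
```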