Although they are now all working on different projects, Randy and Jan keep in touch with Noé via Skype and Facebook and, sometimes, after office hours they still find time to work together on one of their favourite FROG robots.
Recently Jan and Randy fitted two extra lasers to FROG to cover some blind spots at the sides – this meant they had to make holes in the shell (shhh… nobody tell Paulo!).
The extra lasers allow protoFROG (aka the Campus Robot) to navigate better through spaces and some corners inside its new seasonal home, the DesignLab.
There, HMI Master’s students are using FROG as a platform for researching receptionist robots as part of the R3D3 project (part of the Dutch national COMMIT programme).
At five in the afternoon the Royal Alcázar closes its gates for new visitors for the day. The shop closes and the FROG makes its last tour of the day. This is a quiet time to test the latest updates and enhancements.
The security guards make their way from the entrance through the Alcázar making sure that everyone can find the exit. And sometimes, as on this day, they stop to watch the FROG for a moment.
In the FROG project, partner UVA, that is, the University of Amsterdam, is working on people detection, tracking and body pose recognition. Dariu Gavrila is the team leader.
Maja Pantic leads the Intelligent Behaviour Understanding Group (iBUG) at Imperial College in London.
Ioannis Marras
Ioannis Marras is a researcher from the Intelligent Behaviour Understanding Group (iBUG) at Imperial College, London. His research interests include image/video processing, computer vision and pattern recognition. During his work, he has developed multiple computer vision techniques for 2D/3D face tracking and 2D/3D face recognition in the wild.
Jie Shen is a researcher from the Intelligent Behaviour Understanding Group (iBUG) at Imperial College, London. His research interests include software engineering, human-robot interaction, and affect-sensitive human-computer interfaces. During his work, he has developed the HCI^2 Framework (http://ibug.doc.ic.ac.uk/resources/hci2-framework/), which is a software integration framework for human-computer interaction systems, currently being used in multiple research projects.
Jie Shen defended his PhD thesis, titled ‘Realising Affect-Sensitive Multimodal Human-Computer Interface: Hardware and Software Infrastructure’, in May 2014.
You will find some of UPO’s output for FROG and other research projects here: Luis, datasets and publications.
Luis and Fernando lead and supervise UPO’s team of young researchers…
Ignacio Pérez Hurtado de Mendoza is a computer science postdoc interested in the application of machine learning techniques to social robotics. In the FROG project he supports the development of software modules related to the navigation of the robot and assists with the deployment of experiments.
Noé Pérez Higueras is in charge of the safe and robust navigation of the FROG robot in (indoor and outdoor) environments with people. Noé is trying to add social capabilities to navigation algorithms so that the robot respects human social conventions and guarantees the comfort of surrounding persons.
His PhD thesis is on “Robot autonomous navigation and interaction in pedestrian environments”.
Noé’s PhD is directly related to his work in the FROG project. Mainly, he is studying the different robot navigation algorithms and trying to extend them by adding social skills. To do that he employs machine learning techniques in order to learn from real people how they navigate between each other in crowded scenarios.
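As an illustration of this idea (not Noé’s actual code): in feature-matching style inverse reinforcement learning, the weights of a navigation cost function are nudged until the planner’s trajectories show the same feature statistics (path length, proximity to people, …) as human demonstrations. A minimal numpy sketch, with entirely hypothetical features and names:

```python
import numpy as np

def trajectory_features(traj, people):
    """Two toy features of a 2-D trajectory: total path length and mean
    'proximity' to the nearest person (higher = closer). Illustrative only."""
    traj = np.asarray(traj, dtype=float)
    people = np.asarray(people, dtype=float)
    steps = np.diff(traj, axis=0)
    length = np.linalg.norm(steps, axis=1).sum()
    # Distance from each waypoint to the nearest person.
    dists = np.min(np.linalg.norm(traj[:, None, :] - people[None, :, :], axis=2), axis=1)
    proximity = np.mean(1.0 / (dists + 0.1))
    return np.array([length, proximity])

def feature_matching_update(w, demo_feats, planned_feats, lr=0.1):
    """One gradient step: raise the cost weight of features the planner
    over-uses relative to the human demonstration, so the next planning
    round penalises them more. Weights are kept non-negative."""
    return np.maximum(w + lr * (planned_feats - demo_feats), 0.0)
```

If the planner cuts close to a bystander while the human demonstration keeps its distance, the proximity weight grows, making close passes more expensive on the next iteration.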
In 2014 Noé spent 3 weeks in the Netherlands, mapping the Gallery at the University of Twente for the opening ceremony with the Dutch king.
Noé fitted in well with the UT PhDs and will continue to work with some of them in the TERESA project. When he makes a website we will link to it here.
In the FROG project, Javier Pérez-Lara is working on improving localisation algorithms, based not only on laser readings but also on appearance matching, to recover from convergence to an erroneous pose or even from situations in which the robot is completely lost.
Javier’s thesis topic is robust localisation for mobile robot navigation and interaction in crowded environments, where the variability of the surroundings must be taken into account when localising and relocalising mobile robots.
Rafael Ramón Vigo’s transversal competences in electronics, software and hardware are all put to good use in the FROG project. Rafael helps with the general set-up of the robotic platform and assists with the deployment of experiments.
Rafael’s PhD thesis is on inferring, from data and its statistics, insights about human navigation, with the idea of transferring them to the robot’s navigation stack. The approach is based on machine learning algorithms.
Rafael also grows delicious mangoes. Recently his family planted nearly 2000 young trees that will come into production in 4 or 5 years time.
In 2013, Noé, Javier and Rafael had to spend weeks at a time in Lisbon. Though often cold or very tired, they did discover some good places to eat and were given Wi-Fi access at most of them.
How about this for an emblem? It looks great on the FROG polos.
The emblem printed on the polos is actually a reversed version of the docking target. It is a set of ArUco markers that just happen to say FROG – though it probably took Fernando Caballero quite some time to find them. UPO uses this ArUco marker to align the FROG to its charger during the docking procedure.
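Once the markers are detected (for example with OpenCV’s ArUco module), aligning to the charger comes down to simple pinhole-camera geometry. A minimal sketch of that last step – the corner layout, marker size and focal length below are illustrative assumptions, not FROG’s actual calibration:

```python
import numpy as np

def docking_correction(corners, image_width, marker_size_m=0.10, focal_px=500.0):
    """Estimate the lateral offset (m) and rough distance (m) to a docking
    marker from its four detected corner pixels (ordered as a detector like
    cv2.aruco would return them: top-left, top-right, bottom-right,
    bottom-left). All numbers are illustrative, not FROG's calibration."""
    c = np.asarray(corners, dtype=float)
    centre_x = c[:, 0].mean()                 # marker centre in the image
    side_px = np.linalg.norm(c[0] - c[1])     # top edge length in pixels
    distance = focal_px * marker_size_m / side_px        # pinhole model
    lateral = (centre_x - image_width / 2.0) * distance / focal_px
    return lateral, distance
```

A docking controller would then steer to drive `lateral` towards zero while slowly reducing `distance`.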
Yesterday, while the FROG was powered-down for hardware enhancements, Randy Klaassen and Jan Kolkmeier from the University of Twente added some features to their FROG Wizard-of-Oz web interface.
They designed and implemented this web interface to control not only the output of specific AR content but also to trigger chosen steps from the State Machine.
This interface is not used for the FROG tour when the robot is running autonomously, but it is a splendid tool for experiments and testing as it can trigger specific states or abilities. It can also be used to keep the FROG in action in the case of really bad weather, as it can run just the indoor parts of the robot mission.
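The pattern behind such a wizard interface can be sketched as a small state machine whose transitions either fire normally or are overridden by the operator. All state and event names below are made up for illustration; FROG’s actual State Machine is more elaborate:

```python
class TourStateMachine:
    """Toy state machine illustrating how a Wizard-of-Oz interface might
    trigger states out of their normal order. Names are hypothetical."""

    TRANSITIONS = {
        "idle": {"start": "touring"},
        "touring": {"explain": "presenting", "stop": "idle"},
        "presenting": {"done": "touring"},
    }

    def __init__(self):
        self.state = "idle"

    def trigger(self, event):
        """Normal operation: follow a defined transition, if one exists."""
        nxt = self.TRANSITIONS.get(self.state, {}).get(event)
        if nxt is None:
            return False  # event not valid in the current state
        self.state = nxt
        return True

    def force_state(self, state):
        """Wizard override: jump straight to a state, bypassing transitions."""
        if state not in self.TRANSITIONS:
            return False
        self.state = state
        return True
```

The `force_state` override is what makes such an interface useful for testing: an experimenter can jump straight to the state under study without walking the robot through the whole tour.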
Another thing that has changed since our last visit is the tapestry with the map of Europe. The AR overlay was achieved by defining markers (the positions of a number of specific features) on a photo of the original to build a model. These markers then had to be recognised by the FROG in the incoming images from the antenna’s camera so that they could be used to aim the overlay. The same set of markers was defined in the overlay, and these had to match up with the features in the tapestry (the target) so that the overlay would be correctly positioned in the resulting projection. All of this is necessary because, due to its autonomous social navigation, the FROG may stop at a different location in the hall on each separate tour.
Too many ripples in the tapestry can mean that the feature markers are not recognised. Of course, there are several ways to compensate for differences in the folds of the tapestry between tours, but these would take some time to implement, and as we had more pressing matters for this session a quick solution was chosen: using the whole tapestry area as the target instead of specific points on it. Not as elegant as it could be, but certainly a neat solution for a proof of concept.
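The marker-to-target alignment described above is the classic homography estimation problem: given point correspondences between the reference photo and the camera image, a 3×3 projective transform positions the overlay. In practice one would typically call something like OpenCV’s `cv2.findHomography`; here is a plain-numpy sketch of the underlying Direct Linear Transform, for illustration only:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: least-squares homography mapping src
    points to dst points (at least 4 correspondences, no 3 collinear)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to 2-D points (homogeneous divide included)."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    out = homog @ H.T
    return out[:, :2] / out[:, 2:3]
```

With the homography estimated each time the robot stops, the overlay corners can simply be pushed through `project` to land on the tapestry, wherever FROG happens to be standing.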
A collaborative project under the FP7-ICT-2011.2.1 Cognitive Systems and Robotics (a), (d) area of activity.