
CRNS talk series

The Centre for Robotics and Neural Systems hosts the CRNS talk series, with distinguished invited speakers from both the research community and industry.

The seminars are held online, are open to the general public and are free to attend. For further details, please contact Dr Amir Aly, the coordinator of the CRNS talk series.

Confirmed speakers

  • Dr Alessandra Sciutti - Italian Institute of Technology (IIT) - Italy

  • Dr Rachid Alami - National Center of Scientific Research (LAAS-CNRS) - France

  • Dr Séverin Lemaignan - PAL robotics - Spain

  • Professor Mehul Bhatt - Örebro University - Sweden

  • Dr Michael Spranger - SONY AI - Japan

  • Professor Jochen Triesch - Frankfurt Institute for Advanced Studies - Germany

  • Dr Harold Soh - National University of Singapore (NUS) - Singapore

  • Dr Ben Robins - University of Hertfordshire - UK

Past events

Dr Ben Robins - University of Hertfordshire, UK 

Wednesday 11 May 2022 - 11:00-12:30

Robots as therapeutic tools: Encouraging social interaction skills in children with autism
There is no video due to confidentiality.

The talk will present several robots, including the childlike robot KASPAR, developed at the University of Hertfordshire, UK. It will show how these robots can engage autistic children in simple interactive activities, such as turn-taking and imitation games, and how they can take on the role of a social mediator, encouraging children with autism to interact with other people (children and adults).
The talk will also present several case study examples drawn from work with children with autism at schools and in family homes, showing possible uses of KASPAR for therapeutic or educational objectives. These case studies show how the robot can:
  • help to break the isolation,
  • encourage the use of language,
  • mediate child-child or child-adult interaction,
  • help children with autism manage collaborative play,
  • complement the work in the classroom,
  • provide the opportunity for basic embodied and cognitive learning (e.g. cause and effect; exploring basic emotions such as ‘happy’ and ‘sad’; visual perspective-taking skills),
  • at home – bring the family together.

Dr Harold Soh - National University of Singapore (NUS) - Singapore 

Wednesday 4 May 2022 - 11:00-12:30

Human Models for Trust and Communication

There is no video due to confidentiality.


At CLeAR, we believe human models are crucial for fluent human-robot collaboration. When coupled with planning/learning, human models enable robots to display intelligent interactive behavior. However, crafting good human models remains difficult. 
In this talk, we’ll discuss facets of human modeling, starting with our efforts in capturing human trust in the robot. We’ll discuss a perspective that views trust as function learning and the impact this has in multi-task scenarios. Next, we will cover MIRROR, a new approach to quickly learn human models that are useful for (communication) planning. MIRROR is inspired by social projection theory, which hypothesizes that humans use self-models to understand others. In a similar manner, MIRROR leverages robot self-models learned using RL to bootstrap human modeling. We will examine how MIRROR performs relative to state-of-the-art methods using simulations and a human-subjects study. Finally, we’ll discuss how MIRROR can be extended towards real-time human-robot collaboration, and take a broader perspective of the challenges and opportunities that lie ahead. 
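To make the "trust as function learning" idea concrete, here is a minimal toy sketch (an illustration only, not the speaker's actual model or code): trust in the robot on a new task is estimated from observed outcomes on previously seen tasks, weighted by task similarity, so trust earned on one task transfers to similar tasks. The feature vectors, RBF kernel and 0.5 prior are all hypothetical choices.

```python
import math

def predict_trust(new_task, history, length_scale=1.0):
    """Estimate trust in the robot on a new task.

    new_task : feature vector describing the task
    history  : list of (task_features, outcome) pairs, outcome in {0.0, 1.0}
    Returns a similarity-weighted average of past outcomes in [0, 1].
    """
    if not history:
        return 0.5  # uninformed prior before any observations
    num = den = 0.0
    for feats, outcome in history:
        # Squared distance between the two tasks in feature space
        d2 = sum((a - b) ** 2 for a, b in zip(new_task, feats))
        # Closer tasks get exponentially more weight (RBF kernel)
        w = math.exp(-d2 / (2 * length_scale ** 2))
        num += w * outcome
        den += w
    return num / den
```

Under this view, a success on one task raises predicted trust most for tasks with similar features, which is what makes the multi-task setting interesting.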

Professor Jochen Triesch - Frankfurt Institute for Advanced Studies - Germany

Wednesday 27 April 2022 - 11:00-12:30

Learning to see without supervision

Biological brains can learn much more autonomously than today’s AI systems and robots. How does this work and can we reproduce such autonomous learning abilities in artificial systems? Over the last years we have been studying this question for the case of visual perception. We have constructed models of how human infants learn to see the world in three dimensions, begin to track moving objects and learn to recognize them without any supervision. We have compared these models to biological data and validated them on physical robots. Studying the computational principles underlying these learning processes, we highlight effective information compression as the central driving force behind our brain’s ability to learn to see without supervision. 

Dr Michael Spranger - SONY AI - Japan

Wednesday 30 March 2022 - 11:00-12:30

Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning

There is no video due to confidentiality.

Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans. Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical manoeuvres to pass or block opponents while operating their vehicles at their traction limits. Racing simulations, such as the PlayStation game Gran Turismo, faithfully reproduce the non-linear control challenges of real race cars while also encapsulating the complex multi-agent interactions. 
Here we describe how we trained agents for Gran Turismo that can compete with the world’s best e-sports drivers. We combine state-of-the-art, model-free, deep reinforcement learning algorithms with mixed-scenario training to learn an integrated control policy that combines exceptional speed with impressive tactics. In addition, we construct a reward function that enables the agent to be competitive while adhering to racing’s important, but under-specified, sportsmanship rules. We demonstrate the capabilities of our agent, Gran Turismo Sophy, by winning a head-to-head competition against four of the world’s best Gran Turismo drivers. By describing how we trained championship-level racers, we demonstrate the possibilities and challenges of using these techniques to control complex dynamical systems in domains where agents must respect imprecisely defined human norms.
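As a rough illustration of how sportsmanship constraints can enter a racing reward, here is a toy per-timestep reward (the terms and weights are made up for illustration; this is not Sophy's actual reward function):

```python
def race_reward(progress_m, off_track, collision,
                w_progress=1.0, w_off=5.0, w_col=10.0):
    """Toy per-timestep racing reward.

    progress_m : metres advanced along the track this step
    off_track  : True if the car left the course
    collision  : True if the car made contact with an opponent
    """
    # Reward raw speed via progress along the course...
    reward = w_progress * progress_m
    # ...but penalise unsafe or unsporting behaviour.
    if off_track:
        reward -= w_off
    if collision:
        reward -= w_col
    return reward
```

Balancing such penalties is the delicate part: too small and the agent drives dirty; too large and it becomes overly timid when overtaking.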

Professor Mehul Bhatt - Örebro University - Sweden

Wednesday 16 March 2022 - 11:00-12:30

Artificial visual intelligence: Perceptual commonsense for human-centred cognitive technologies

This talk addresses computational cognitive vision and perception at the interface of (spatial) language, (spatial) logic, (spatial) cognition, and artificial intelligence. Summarizing recent works, I present general methods for the semantic interpretation of dynamic visuospatial imagery with an emphasis on the ability to perform abstraction, reasoning, and learning with cognitively rooted structured characterizations of commonsense knowledge pertaining to space and motion. I will particularly highlight:
  • explainable models of computational visuospatial commonsense at the interface of symbolic and neural techniques;
  • deep semantics, entailing systematically formalised declarative (neurosymbolic) reasoning and learning with aspects pertaining to space, space-time, motion, actions & events, spatio-linguistic conceptual knowledge; and
  • general foundational commonsense abstractions of space, time, and motion needed for representation mediated (grounded) reasoning and learning with dynamic visuospatial stimuli.
The presented works – demonstrated against the backdrop of applications in autonomous driving, cognitive robotics, visuoauditory media, and cognitive psychology – are intended to serve as a systematic model and general methodology integrating diverse, multi-faceted AI methods pertaining to knowledge representation and reasoning, computer vision, and machine learning towards realising practical, human-centred computational visual intelligence. I will conclude by highlighting a bottom-up interdisciplinary approach – at the confluence of cognition, AI, interaction, and design science – necessary to better appreciate the complexity and spectrum of varied human-centred challenges in the design and (usable) implementation of (explainable) artificial visual intelligence solutions in diverse human-system interaction contexts.

Dr Séverin Lemaignan - PAL robotics - Spain

Wednesday 2 March 2022 - 11:00-12:30

Teaching robots autonomy in social situations
Participatory methodologies are now well established in social robotics to generate blueprints of what robots should do to assist humans. The actual implementation of these blueprints, however, remains a technical challenge for us, roboticists, and the end-users are not usually involved at that stage.
In two recent studies, however, we have shown that, under the right conditions, robots can learn their behaviours directly from domain experts, replacing traditional heuristic-based or plan-based robot controllers with autonomously learnt social policies. From these studies we have derived a novel 'end-to-end' participatory methodology called LEADOR, which I will introduce during the seminar.
I will also discuss recent progress on human perception and modelling in a ROS environment with the emerging ROS4HRI standard.


Dr Rachid Alami - National Center of Scientific Research (LAAS-CNRS) - France

Wednesday 9 February 2022 - 11:00-12:30

Models and Decisional issues for Human-Robot Joint Action


This talk will address some key decisional issues that are necessary for a cognitive and interactive robot that shares space and tasks with humans. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. The system is comprehensive: it aims to cover a complete set of abilities, articulated so that the robot controller can conduct human-robot joint action – seen as collaborative problem solving and task achievement – in a flexible and fluent manner. These abilities include geometric reasoning and situation assessment based essentially on perspective-taking and affordances, management and exploitation of each agent's (human and robot) knowledge in a separate cognitive model, human-aware task planning, and interleaved execution of shared plans. We will also discuss the key issues linked to the pertinence and acceptability of the robot's behaviour to the human, and how these qualitatively influence the robot's decisional, planning, control and communication processes.

Dr Alessandra Sciutti - Italian Institute of Technology (IIT) - Italy

Wednesday 2 February 2022 - 11:00-12:30

Cognitive robots for more humane interactions


A cognitive robot is a robot capable of adapting to, predicting, and pro-actively interacting with its environment, and of communicating with its human partners. Our research leverages the humanoid robot iCub to test how to build such a cognitive interactive agent. We model the minimal skills necessary for cognitive development, such as the visual features that enable the robot to recognize the presence of other agents in the scene, their internal states and their responses to the robot's behaviour. In a dual approach, we are trying to understand how to modulate robot movement to make it more transparent and understandable to non-expert users. As a next step, we are focusing on the development of simple cognitive architectures that integrate the sensory and motor capabilities developed in isolation with memory, internal motivation and learning mechanisms, to achieve personalization and adaptation skills. We believe that only a structured effort toward cognition will in future allow for more humane machines, able to see the world and people as we do and to engage with them in a meaningful manner.


Other workshops and events

  • Dr Aly co-organises the 31st IEEE International Conference on Robot & Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Towards Socially Intelligent Robots in Real-World Applications: Challenges and Intricacies) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Machine Learning for HRI: Bridging the Gap between Action and Perception) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Cultural Influences on Human-Robot Interaction: Today and Tomorrow) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Context-Awareness in Human-Robot Interaction: Approaches and Challenges) in conjunction with the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2022.
  • Workshop (Robot Behavior Adaptation to Human Social Norms) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2021.
  • Workshop (Robotics for People: Perspectives on Interaction, Learning, and Safety) in conjunction with the conference Robotics: Science and Systems (RSS), 2021.
iCub robot

Annual UK Robotics Week at University of Plymouth

University of Plymouth organises an afternoon of academic presentations, followed by a public exhibition and debate on robotics and artificial intelligence.

Previous seminars

Virtual workshops:

CRNS is excited to launch CRNS@Webinar, our first series of virtual research workshops for the 2020 Autumn semester. In this series you will have the opportunity to engage with cutting-edge research topics delivered by some of the world's top-cited research scholars from the fields of artificial intelligence, machine learning and robotics.



Event photography and video

Please be aware that some of the University of Plymouth's public events (both online and offline) may be attended by University staff, photographers and videographers, for capturing content to be used in University online and offline marketing and promotional materials, for example webpages, brochures or leaflets. If you, or a member of your group, do not wish to be photographed or recorded, please let a member of staff know.