CRNS talks series
The Centre for Robotics and Neural Systems hosts the CRNS talks series, with prestigious and passionate invited speakers from both the research community and industry.
The seminars are held online, are open to the general public, and are free to attend. For further details, please contact Dr Amir Aly, the coordinator of the CRNS talks series.

Confirmed speakers

  • To be rescheduled - Dr Ali Agha, NASA JPL, USA 

Past events

Dr Dileep George – Google DeepMind, USA

Wednesday 15 March 2023: 15:00–16:30
Reverse engineering cortical microcircuit models of visual perception 
Abstract
Although deep learning has made tremendous strides in visual recognition and generation, a significant gap remains between human and machine perception. In this talk, I will argue that models that use stochastic variables, lateral interactions, and dynamic inference might be required to close this gap. I will then describe a generative model, Recursive Cortical Networks (RCN), that partially meets these requirements and has demonstrated excellent performance on some visual task benchmarks.
Using RCN, we derive a family of anatomically instantiated and functional cortical circuit models. Efficient inference and generalization guided the representational choices in the original computational model. The cortical circuit model is derived by systematically comparing the computational requirements of this model with known anatomical constraints.
The derived model suggests precise functional roles for the feed-forward, feedback, and lateral connections observed in different laminae and columns, assigns a computational role to the path through the thalamus, predicts the interactions between blobs and interblobs, and offers an algorithmic explanation for the innate inter-laminar connectivity between clonal neurons within a cortical column. The model also explains several visual phenomena, including the subjective contour effect and the neon-color-spreading effect, with circuit-level precision. Our work paves a new path forward in understanding the logic of cortical and thalamic circuits.
 

Professor Jun Tani – Okinawa Institute of Science and Technology (OIST), Japan

Wednesday 15 February 2023: 11:00–12:30
Exploring robotic minds by extending the framework of predictive coding and active inference
Abstract
The focus of my research has been to investigate how cognitive agents can develop structural representations and functions through iterative interaction with the world, exercising agency and learning from the resulting perceptual experience. For this purpose, my team has developed various models analogous to predictive coding and active inference frameworks based on the free energy principle. These models have been used to conduct diverse robotics experiments, including goal-directed planning and replanning in dynamic environments, social embodied interactions with others, the development of higher cognitive competencies for executive control of attention and working memory, embodied language, and more.
The talk focuses on a set of emergent phenomena that we observed in those robotics experiments. These findings could offer non-trivial accounts for understanding embodied cognition, including the issue of subjective experience.
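As background for readers new to the framework (this is the standard formulation from the free energy literature, not something specific to Prof Tani's models), the variational free energy of observations o under an approximate posterior q(s) over hidden states s can be written as

    F = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)]
      = \underbrace{D_{\mathrm{KL}}[q(s)\,\|\,p(s)]}_{\text{complexity}} - \underbrace{\mathbb{E}_{q(s)}[\ln p(o \mid s)]}_{\text{accuracy}}

Perception corresponds to minimising F with respect to q (predictive coding realises this as prediction-error minimisation), while active inference additionally selects actions expected to keep free energy low.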
 

Dr Silvia Rossi – University of Naples "Federico II", Italy 

Wednesday 2 November 2022: 11:00–12:30 
Behavioral Adaptation and Transparency in HRI
Abstract
To effectively collaborate with people, robots are expected to detect and profile the users they are interacting with, and to modify and adapt their behaviour according to the learned models. Many research challenges for personal robotics are related to the need for a high degree of personalization of the robot’s behaviour to the specific user’s needs and preferences. It is crucial to investigate how robots can better adjust to their human interaction partners and how the behaviour of a robot can be personalized based on the user’s characteristics, such as personality and cognitive profile, and their dynamic changes. However, in practice, it is not just a matter of performing an interaction task correctly, but mainly of performing such tasks in a way that is socially acceptable to humans. The more robots become adaptive and flexible, the more their behaviours need to be transparent.
When interacting with complex intelligent artifacts, humans inevitably formulate expectations to understand and predict their behaviours. Indeed, robots’ behaviours should be self-explanatory so that users can be confident in their knowledge of what these systems are doing and why. In this talk, we will present and discuss several topics connected to adaptivity in robot behaviour and the need for legibility and transparency.
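To make the idea of profile-driven adaptation concrete, here is a toy sketch (hypothetical, not Dr Rossi's system; the profile fields, parameter ranges, and mappings are all invented for illustration) of how a learned user model might parameterise robot behaviour:

    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        extraversion: float    # 0.0 (introvert) .. 1.0 (extrovert); hypothetical scale
        cognitive_load: float  # 0.0 .. 1.0, e.g. estimated from task performance

    @dataclass
    class RobotBehaviour:
        speech_rate: float           # words per minute
        interaction_distance: float  # metres (proxemics)
        verbosity: str               # "terse" or "chatty"

    def adapt_behaviour(p: UserProfile) -> RobotBehaviour:
        """Map a learned user profile to interaction parameters.
        Extroverts tolerate closer distances and more talk; high cognitive
        load argues for slower, terser speech."""
        return RobotBehaviour(
            speech_rate=160 - 60 * p.cognitive_load,
            interaction_distance=1.2 - 0.4 * p.extraversion,
            verbosity="chatty" if p.extraversion > 0.5 and p.cognitive_load < 0.5 else "terse",
        )

    print(adapt_behaviour(UserProfile(extraversion=0.8, cognitive_load=0.2)))

Transparency enters exactly here: because the mapping is explicit, the robot can also explain why it chose a given behaviour.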
 

Dr Ben Robins – University of Hertfordshire, UK 

Wednesday 11 May 2022: 11:00–12:30
Robots as therapeutic tools: Encouraging social interaction skills in children with autism
Abstract
The talk will present several robots including the childlike robot KASPAR which was developed at the University of Hertfordshire, UK, and the ways in which the robots can engage autistic children in simple interactive activities such as turn-taking and imitation games, and how the robots assume the role of a social mediator – encouraging children with autism to interact with other people (children and adults).
The talk will also present several case study examples taken from the work with children with autism at schools and at family homes, showing the possible implementation of KASPAR for therapeutic or educational objectives. These case studies show how the robot can:
  • help to break the isolation
  • encourage the use of language
  • mediate child-child or child-adult interaction
  • help children with autism manage collaborative play
  • complement the work in the classroom
  • provide the opportunity for basic embodied and cognitive learning (e.g. cause and effect; exploring basic emotions such as ‘happy’ and ‘sad’; visual perspective-taking skills)
  • at home – bring the family together.
 

Dr Harold Soh – National University of Singapore (NUS) – Singapore 

Wednesday 4 May 2022: 11:00–12:30 
Human Models for Trust and Communication
There is no video due to confidentiality.
Abstract
At CLeAR, we believe human models are crucial for fluent human-robot collaboration. When coupled with planning/learning, human models enable robots to display intelligent interactive behavior. However, crafting good human models remains difficult. 
In this talk, we’ll discuss facets of human modelling, starting with our efforts to capture human trust in the robot. We’ll present a perspective that views trust as function learning and the impact this has in multi-task scenarios. Next, we will cover MIRROR, a new approach for quickly learning human models that are useful for (communication) planning. MIRROR is inspired by social projection theory, which hypothesises that humans use self-models to understand others. In a similar manner, MIRROR leverages robot self-models learned using RL to bootstrap human modelling. We will examine how MIRROR performs relative to state-of-the-art methods using simulations and a human-subjects study. Finally, we’ll discuss how MIRROR can be extended towards real-time human-robot collaboration, and take a broader perspective on the challenges and opportunities that lie ahead.
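To illustrate "trust as function learning" (a minimal sketch, not the CLeAR group's actual model; the two task features are invented for the example), trust can be regressed over task features with a Gaussian process, so trust observed on a few tasks generalises, with uncertainty, to unseen ones:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical task features: (difficulty, fragility of the handled object), in [0, 1].
    tasks_seen = np.array([[0.1, 0.2], [0.4, 0.3], [0.8, 0.9]])
    trust_reports = np.array([0.9, 0.7, 0.2])  # user-reported trust after each demonstration

    # Treat trust as a smooth latent function over the task space.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
    gp.fit(tasks_seen, trust_reports)

    # Multi-task generalisation: predict trust (with uncertainty) on an unseen task.
    mean, std = gp.predict(np.array([[0.6, 0.5]]), return_std=True)
    print(f"predicted trust: {mean[0]:.2f} +/- {std[0]:.2f}")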
 

Professor Jochen Triesch – Frankfurt Institute for Advanced Studies – Germany

Wednesday 27 April 2022: 11:00–12:30
Learning to see without supervision
Abstract
Biological brains can learn much more autonomously than today’s AI systems and robots. How does this work, and can we reproduce such autonomous learning abilities in artificial systems? In recent years, we have been studying this question for the case of visual perception. We have constructed models of how human infants learn to see the world in three dimensions, begin to track moving objects, and learn to recognize them without any supervision. We have compared these models to biological data and validated them on physical robots. Studying the computational principles underlying these learning processes, we highlight effective information compression as the central driving force behind our brain’s ability to learn to see without supervision.
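As a toy illustration of compression-driven unsupervised learning (a plain autoencoder, far simpler than the developmental models in the talk; the dimensions and data are placeholders): forcing inputs through a low-dimensional bottleneck and minimising reconstruction error learns a representation with no labels at all.

    import torch
    import torch.nn as nn

    # Toy autoencoder: compress 64-D "sensory" inputs into an 8-D code, then reconstruct.
    model = nn.Sequential(
        nn.Linear(64, 8), nn.ReLU(),  # encoder: the bottleneck forces compression
        nn.Linear(8, 64),             # decoder: reconstruct the input from the code
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(256, 64)  # stand-in for unlabeled sensory data
    for _ in range(200):
        loss = nn.functional.mse_loss(model(x), x)  # self-supervision: target is the input itself
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final reconstruction loss: {loss.item():.4f}")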
 

Dr Michael Spranger – SONY AI – Japan

Wednesday 30 March 2022: 11:00–12:30
Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning
There is no video due to confidentiality. 
Abstract
Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans. Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical manoeuvres to pass or block opponents while operating their vehicles at their traction limits. Racing simulations, such as the PlayStation game Gran Turismo, faithfully reproduce the non-linear control challenges of real race cars while also encapsulating the complex multi-agent interactions. 
Here we describe how we trained agents for Gran Turismo that can compete with the world’s best e-sports drivers. We combine state-of-the-art, model-free, deep reinforcement learning algorithms with mixed-scenario training to learn an integrated control policy that combines exceptional speed with impressive tactics. In addition, we construct a reward function that enables the agent to be competitive while adhering to racing’s important, but under-specified, sportsmanship rules. We demonstrate the capabilities of our agent, Gran Turismo Sophy, by winning a head-to-head competition against four of the world’s best Gran Turismo drivers. By describing how we trained championship-level racers, we demonstrate the possibilities and challenges of using these techniques to control complex dynamical systems in domains where agents must respect imprecisely defined human norms.
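The published GT Sophy reward is considerably more elaborate, but a hedged sketch conveys the shape of the problem: a dense progress term keeps the agent fast, while penalty terms encode the under-specified sportsmanship norms; every coefficient and signal name below is invented for illustration.

    def racing_reward(progress_m: float, off_course: bool,
                      caused_collision: bool, blocked_unfairly: bool) -> float:
        """Illustrative composite racing reward (not the actual GT Sophy function).
        progress_m: metres of track progress since the last step (speed incentive)."""
        reward = 1.0 * progress_m
        if off_course:
            reward -= 5.0   # cutting the track must never pay off
        if caused_collision:
            reward -= 10.0  # at-fault contact is penalised hardest
        if blocked_unfairly:
            reward -= 3.0   # weaving to block is poor sportsmanship
        return reward

Tuning such penalties so the agent stays competitive yet clean is precisely the difficulty the abstract points to.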
 

Professor Mehul Bhatt – Örebro University – Sweden

Wednesday 16 March 2022: 11:00–12:30
Artificial visual intelligence: Perceptual commonsense for human-centred cognitive technologies
Abstract
This talk addresses computational cognitive vision and perception at the interface of (spatial) language, (spatial) logic, (spatial) cognition, and artificial intelligence. Summarizing recent works, I present general methods for the semantic interpretation of dynamic visuospatial imagery with an emphasis on the ability to perform abstraction, reasoning, and learning with cognitively rooted structured characterizations of commonsense knowledge pertaining to space and motion. I will particularly highlight:
  • explainable models of computational visuospatial commonsense at the interface of symbolic and neural techniques;
  • deep semantics, entailing systematically formalised declarative (neurosymbolic) reasoning and learning with aspects pertaining to space, space-time, motion, actions & events, spatio-linguistic conceptual knowledge; and
  • general foundational commonsense abstractions of space, time, and motion needed for representation-mediated (grounded) reasoning and learning with dynamic visuospatial stimuli.
The presented works – demonstrated against the backdrop of applications in autonomous driving, cognitive robotics, visuoauditory media, and cognitive psychology – are intended to serve as a systematic model and general methodology integrating diverse, multi-faceted AI methods pertaining to knowledge representation and reasoning, computer vision, and machine learning towards realising practical, human-centred computational visual intelligence. I will conclude by highlighting a bottom-up interdisciplinary approach – at the confluence of cognition, AI, interaction, and design science – necessary to better appreciate the complexity and spectrum of varied human-centred challenges in the design and (usable) implementation of (explainable) artificial visual intelligence solutions in diverse human-system interaction contexts.
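As a small concrete taste of declarative visuospatial characterisation (a toy sketch in the spirit of qualitative spatial reasoning, not the speaker's actual systems; the relation names and event pattern are invented for illustration): qualitative relations between object bounding boxes, checked frame by frame, let a motion event be defined declaratively rather than learned end to end.

    from typing import List, Tuple

    Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

    def overlaps(a: Box, b: Box) -> bool:
        """Topological relation: the two regions share area."""
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def left_of(a: Box, b: Box) -> bool:
        """Directional relation: a lies entirely to the left of b."""
        return a[2] <= b[0]

    def passes_across(track_a: List[Box], track_b: List[Box]) -> bool:
        """Declarative motion event: a is left of b, then overlaps it, then is right of it."""
        rels = []
        for a, b in zip(track_a, track_b):
            if overlaps(a, b):
                rels.append("overlap")
            elif left_of(a, b):
                rels.append("left")
            elif left_of(b, a):
                rels.append("right")
        # Collapse consecutive duplicates, then match the qualitative pattern.
        collapsed = [r for i, r in enumerate(rels) if i == 0 or r != rels[i - 1]]
        return collapsed == ["left", "overlap", "right"]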
 

Dr Séverin Lemaignan – PAL robotics – Spain

Wednesday 2 March 2022: 11:00–12:30 
Teaching robots autonomy in social situations
Abstract
Participatory methodologies are now well established in social robotics for generating blueprints of what robots should do to assist humans. The actual implementation of these blueprints, however, remains a technical challenge for us roboticists, and end-users are not usually involved at that stage.
In two recent studies, however, we have shown that, under the right conditions, robots can learn their behaviours directly from domain experts, replacing traditional heuristic-based or plan-based robot controllers with autonomously learnt social policies. From these studies we have derived a novel 'end-to-end' participatory methodology called LEADOR, which I will introduce during the seminar.
I will also discuss recent progress on human perception and modelling in a ROS environment with the emerging ROS4HRI standard.
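For orientation, here is a minimal sketch of what consuming ROS4HRI output could look like in ROS 2; it assumes the hri_msgs package and the /humans/faces/tracked topic as specified (to my reading) by REP-155, so verify the names against the spec before relying on them.

    import rclpy
    from rclpy.node import Node
    from hri_msgs.msg import IdsList  # list of currently tracked feature IDs (per REP-155)

    class FaceWatcher(Node):
        """Log the set of face IDs currently tracked by a ROS4HRI pipeline."""
        def __init__(self):
            super().__init__('face_watcher')
            # REP-155 publishes tracked-feature ID lists under the /humans/ namespace.
            self.create_subscription(IdsList, '/humans/faces/tracked', self.on_faces, 10)

        def on_faces(self, msg: IdsList):
            self.get_logger().info(f'tracked faces: {list(msg.ids)}')

    if __name__ == '__main__':
        rclpy.init()
        rclpy.spin(FaceWatcher())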
 

Dr Rachid Alami – National Center of Scientific Research (LAAS-CNRS) – France 

Wednesday 9 February 2022: 11:00–12:30
Models and Decisional issues for Human-Robot Joint Action
Abstract
This talk will address some key decisional issues that are necessary for a cognitive and interactive robot that shares space and tasks with humans. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. The system is comprehensive: it aims to deal with a complete set of abilities, articulated so that the robot controller can conduct a human-robot joint action, seen as collaborative problem solving and task achievement, in a flexible and fluent manner. These abilities include geometric reasoning and situation assessment based essentially on perspective-taking and affordances; management and exploitation of each agent's (human and robot) knowledge in a separate cognitive model; human-aware task planning; and interleaved execution of shared plans. We will also discuss the key issues linked to the pertinence and acceptability of the robot's behaviour by the human, and how these qualitatively influence the robot's decisional, planning, control, and communication processes.
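A recurring ingredient in human-aware task planning is scoring candidate shared plans not only by robot cost but also by their impact on the human; the sketch below is a generic illustration of that idea (not the LAAS planner), with all fields and weights invented for the example.

    from dataclasses import dataclass

    @dataclass
    class CandidatePlan:
        robot_effort: float  # e.g. expected robot execution time (s)
        human_effort: float  # work the plan delegates to the human
        discomfort: float    # e.g. robot intrusions into the human's space

    def plan_cost(p: CandidatePlan, w_human: float = 3.0, w_comfort: float = 5.0) -> float:
        """Human-aware plan score: cheap for the robot is not enough;
        plans that burden or crowd the human are penalised."""
        return p.robot_effort + w_human * p.human_effort + w_comfort * p.discomfort

    plans = [
        CandidatePlan(robot_effort=20, human_effort=0.0, discomfort=0.8),  # robot does everything, but crowds the human
        CandidatePlan(robot_effort=12, human_effort=2.0, discomfort=0.1),  # shared plan
    ]
    print(min(plans, key=plan_cost))  # the shared plan wins under these weights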
 

Dr Alessandra Sciutti – Italian Institute of Technology (IIT) – Italy

Wednesday 2 February 2022: 11:00–12:30
Cognitive robots for more humane interactions
Abstract
A cognitive robot is a robot capable of adapting, predicting, and pro-actively interacting with the environment, and of communicating with human partners. Our research leverages the humanoid robot iCub to test how to build such a cognitive interactive agent. We model the minimal skills necessary for cognitive development, such as the visual features that enable the robot to recognize the presence of other agents in the scene, their internal state, and their responses to its behavior. In a dual approach, we are trying to understand how to modulate robot movement to make it more transparent and understandable to non-expert users. As a next step, we are focusing on the development of simple cognitive architectures that integrate the sensory and motor capabilities developed in isolation with memory, internal motivation, and learning mechanisms, to achieve personalization and adaptation skills. We believe that only a structured effort toward cognition will in the future allow for more humane machines, able to see the world and people as we do and to engage with them in a meaningful manner.
 

Other workshops and events

  • Dr Aly co-organises the 31st IEEE International Conference on Robot & Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Towards Socially Intelligent Robots in Real-World Applications: Challenges and Intricacies) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Machine Learning for HRI: Bridging the Gap between Action and Perception) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Cultural Influences on Human-Robot Interaction: Today and Tomorrow) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2022.
  • Workshop (Context-Awareness in Human-Robot Interaction: Approaches and Challenges) in conjunction with the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2022.
  • Workshop (Robot Behavior Adaptation to Human Social Norms) in conjunction with the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man), 2021.
  • Workshop (Robotics for People: Perspectives on Interaction, Learning, and Safety) in conjunction with the conference Robotics: Science and Systems (RSS), 2021.

Annual UK Robotics Week at University of Plymouth

The University of Plymouth organises an afternoon of academic presentations, followed by a public exhibition and debate on robotics and artificial intelligence.
Discover cutting-edge robotics research taking place at the University 
 

Virtual research workshops

CRNS ran a series of virtual research workshops in 2020.  
In this series, you can engage with cutting-edge research topics delivered by some of the world's top-cited research scholars from the fields of artificial intelligence, machine learning, and robotics.
Explore the CRNS@Webinar archive, which includes recorded seminars and PDF materials kindly made available by the invited speakers who contributed to this initiative.
 

Event photography and video

Please be aware that some of the University of Plymouth's public events (both online and offline) may be attended by University staff, photographers, and videographers capturing content for use in the University's online and offline marketing and promotional materials, for example webpages, brochures, or leaflets. If you, or a member of your group, do not wish to be photographed or recorded, please let a member of staff know.