Publications
2017
Would you like to sample? Robot Engagement in a Shopping Centre. In The 26th IEEE International Symposium on Robot and Human Interactive Communication.
Nowadays, robots are gradually appearing in public spaces such as libraries, train stations, airports and shopping centres, yet only a small portion of the research literature explores robot applications in such spaces. Studying robot applications in the wild is particularly important for designing commercially viable applications able to meet a specific goal. In this paper we therefore conduct an experiment to test a robot application in a shopping centre, aiming to provide results relevant to today's technological capability and market. We compared the performance of a robot and a human in promoting food samples in a shopping centre, a well-known commercial application, and then analysed the effects of the type of engagement used to achieve this goal. Our results show that, as expected, the robot is able to engage customers similarly to a human. Unexpectedly, however, while an actively engaging human performed better than a passively engaging human, we found the opposite effect for the robot. We investigate this phenomenon and offer possible explanations to be explored and tested in subsequent research.
Organizing committee member of the 1st Workshop on Human-Robot Engagement. 26th International Joint Conference on Artificial Intelligence. [url]
Privacy by Design in Machine Learning Data Collection: A User Experience Experimentation. In Symposium on Designing the User Experience of Machine Learning Systems, AAAI Spring Symposia 2017. [url]
Designing successful user experiences that use machine learning systems is an area of increasing importance. In supervised machine learning for biometric systems, such as face recognition, the user experience can be improved. In order to use biometric authentication systems, users are asked for their biometric information together with their personal information. In contexts where a large number of users must be enrolled frequently, the human expert assisting the data collection process is often replaced by software with a step-by-step user interface. However, this may limit the overall user experience of the system. User experience should be addressed from the very beginning, during the design process. Furthermore, data collection might also raise privacy concerns in users and potentially lead them not to use the system. For these reasons, we propose a privacy by design approach in order to maximize the user experience of the system while reducing users' privacy concerns. To do so we suggest a novel experiment in a Human-Robot Interaction setting. We investigate the effects of embodiment and transparency on privacy and user experience. We expect that embodiment would enhance the overall user experience of the system, independently of transparency, whereas we expect that transparency would reduce participants' privacy concerns. In particular, we forecast that transparency, together with embodiment, would significantly reduce participants' privacy concerns, thus maximising the amount of personal information provided by a user.
Facial Motor Information is Sufficient for Identity Recognition. In 39th Annual Meeting of the Cognitive Science Society. [url]
The face is a central communication channel providing information about the identities of our interaction partners and their potential mental states expressed by motor configurations. Although it is well known that infants' ability to recognise people follows a developmental process, it is still an open question how face identity recognition skills develop and, in particular, how facial expression and identity processing may interact during this development. We propose that acquiring information about the facial motor configurations observed in face stimuli encountered throughout development is sufficient to develop a face-space representation. This representation encodes the observed face stimuli as points in a multidimensional psychological space that can support both facial identity and expression recognition. We validate our hypothesis through computational simulations and suggest implications of this account with respect to the available findings in face processing.
A Domain-Independent Approach of Cognitive Appraisal Augmented by Higher Cognitive Layer of Ethical Reasoning. In 39th Annual Meeting of the Cognitive Science Society. [url]
According to cognitive appraisal theory, emotion in an individual is the result of how a situation or event is evaluated by the individual. This evaluation has different outcomes among people, and it is often suggested to be operationalised by a set of rules or beliefs acquired by the subject throughout development. Unfortunately, this view is particularly detrimental for computational applications of emotion appraisal. In fact, it requires providing a knowledge base that is particularly difficult to establish and manage, especially in systems designed for highly complex scenarios, such as social robots. In addition, according to appraisal theory, an event might elicit more than one emotion at a time in an individual. Hence, determining which emotional state should be attributed in relation to a specific event is another critical issue not yet fully addressed by the available literature. In this work, we show that: (i) the cognitive appraisal process can be realised without a complex set of rules; instead, we propose that this process can be operationalised by knowing only the positive or negative perceived effect the event has on the subject, thus facilitating extensibility and integrability of the emotional system; (ii) the final emotional state to attribute in relation to a specific situation is better explained by ethical reasoning mechanisms. These hypotheses are supported by our experimental results. Therefore, this contribution is particularly significant in providing a simpler and more generalisable explanation of cognitive appraisal theory and in promoting the integration between theories of emotion and ethics studies, currently often neglected by the available literature.
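The two claims above can be illustrated with a toy sketch (not the paper's implementation; all emotion labels, function names and the `harmed_other` flag are hypothetical): appraisal driven only by the perceived positive/negative effect of an event, with a separate ethical layer selecting the final emotion among the candidates.

```python
def appraise(effect: float) -> list[str]:
    """Map the perceived effect of an event (negative..positive) to
    candidate emotions, without any domain-specific rule base."""
    if effect > 0:
        return ["joy", "pride"]
    if effect < 0:
        return ["sadness", "anger"]
    return ["neutral"]


def ethical_filter(candidates: list[str], harmed_other: bool) -> str:
    """Toy ethical-reasoning layer: suppress a self-serving emotion when
    the event harmed another agent, resolving multi-emotion ambiguity."""
    if harmed_other and "joy" in candidates:
        return "guilt"
    return candidates[0]


# An event with a positive effect for the agent but harmful to another:
print(ethical_filter(appraise(+1.0), harmed_other=True))   # guilt
print(ethical_filter(appraise(-0.5), harmed_other=False))  # sadness
```

The point of the sketch is that the appraisal stage needs only the sign of the perceived effect, while disambiguation among co-elicited emotions is delegated to the higher ethical layer.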
2016
The face-space duality hypothesis: a computational model. In 38th Annual Meeting of the Cognitive Science Society. [url]
Valentine's face-space suggests that faces are represented in a psychological multidimensional space according to their perceived properties. However, the proposed framework was initially designed as an account of invariant facial features only, and the representation of dynamic features was neglected. In this paper we propose, develop and evaluate a computational model for a twofold structure of the face-space, able to unify both identity and expression representations in a single implemented model. To capture both invariant and dynamic facial features we introduce the face-space duality hypothesis and subsequently validate it through a mathematical formulation using a general approach to dimensionality reduction. Two experiments with real facial images show that the proposed face-space: (1) supports both identity and expression recognition, and (2) has the twofold structure anticipated by our formal argument.
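The general idea of building a face-space through dimensionality reduction can be sketched as follows. This is a minimal illustration only, assuming a synthetic matrix of vectorized face images and plain SVD/PCA; it is not the model evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))     # 20 faces as 64-pixel vectors (synthetic)

# Centre the data and find the principal axes of variation:
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = vt[:5]                   # a 5-dimensional face-space

# Each face becomes a point in the space; similarity is distance there:
coords = centred @ components.T
d = float(np.linalg.norm(coords[0] - coords[1]))
print(coords.shape)
```

In a face-space of this kind, recognition reduces to comparing distances between points, which is what makes a single geometric representation usable for more than one recognition task.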
2014
Directing human attention with pointing. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication. [pdf] [doi]
Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Human subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These new results can be used to guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
Socially Impaired Robots: Human Social Disorders and Robots' Socio-Emotional Intelligence. Chapter in Social Robotics: 6th International Conference, ICSR 2014, Sydney, NSW, Australia, October 27-29, 2014, Proceedings, Springer International Publishing. [url] [doi]
Social robots need intelligence in order to safely coexist and interact with humans. Robots without functional abilities in understanding others, and unable to empathise, might be a societal risk and may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting the idea that social robots will require a combination of emotional intelligence and social intelligence, namely socio-emotional intelligence. We argue that a robot with even a simple socio-emotional process requires a simulation-driven model of intelligence. Finally, we provide some critical guidelines for designing future socio-emotional robots.
Affective facial expression processing via simulation: A probabilistic model. In Biologically Inspired Cognitive Architectures, volume 10. [url] [doi]
Understanding the mental state of other people is an important skill for intelligent agents and robots to operate within social environments. However, the mental processes involved in 'mind-reading' are complex. One explanation of such processes is Simulation Theory, which is supported by a large body of neuropsychological research. Yet, determining the best computational model or theory to use in simulation-style emotion detection is far from understood. In this work, we use Simulation Theory and neuroscience findings on Mirror-Neuron Systems as the basis for a novel computational model, as a way to handle affective facial expressions. The model is based on a probabilistic mapping of observations from multiple identities onto a single fixed identity ('internal transcoding of external stimuli'), and then onto a latent space ('phenomenological response'). Together with the proposed architecture we present some promising preliminary results.
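The two-stage mapping described above can be sketched in miniature. This is a hedged illustration under strong assumptions (synthetic data, a linear least-squares transcoding, and an SVD latent projection), not the probabilistic model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(size=(30, 16))   # expression vectors from many identities
internal = rng.normal(size=(30, 16))   # same expressions on one fixed identity

# Stage 1 ('internal transcoding'): learn a map from observed faces onto
# the single fixed internal identity, here by linear least squares.
w, *_ = np.linalg.lstsq(observed, internal, rcond=None)
transcoded = observed @ w

# Stage 2 ('phenomenological response'): project the transcoded faces
# onto a low-dimensional latent space via SVD.
centred = transcoded - transcoded.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
latent = centred @ vt[:3].T            # 3-dimensional latent response
print(latent.shape)
```

The design point is the separation of concerns: identity variation is normalised away in the first stage, so the latent space in the second stage has to account only for expression-related variation.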
Co-Chair of the 1st Workshop on Attention for Social Intelligence. UTS ePress. [url]
Social robots will not be accepted in society unless they exhibit social intelligence, and they will need cognitive capabilities that support it. Research from various fields, including artificial intelligence, robotics, computer vision, cognitive psychology, cognitive science and neuroscience, provides crucial knowledge about such necessary cognitive skills. However, further investigation is still required to orchestrate these capabilities in order to achieve social intelligence. Attention is known to play a crucial role in intelligence. It affects other cognitive processes, including perception, action selection, decision making, planning, memory, emotion and learning. It potentially provides a mechanism for managing the cognitive capabilities required to achieve social intelligence. Research on attention remains challenging at all levels, from visual perception to decision making and planning. This workshop aims to examine the role of attention in social intelligence and to foster the emergent area of attention in social robotics.