Current Projects

Project 1 - Physiological Signal Extraction from Video

Some of the most effective methods for recognizing a user’s affective state involve collecting physiological information. These methods suffer from the disadvantage of requiring users to wear biosensors for prolonged periods, which can cause discomfort. In this project, we want to use non-contact physiological measurement methods for affect recognition.

Recent studies have shown that a Photoplethysmogram (PPG) can be measured from a video recording of a subject’s face under ambient light conditions. This method entails analysing subtle colour changes in facial Regions of Interest (ROIs) that contain a high density of blood vessels. The PPG can be used to measure the intervals between successive heartbeats; the variation in these intervals, known as Heart Rate Variability (HRV), is a strong indicator of mental stress and arousal.
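
As a concrete illustration of that last step, the sketch below derives inter-beat intervals, and a common time-domain HRV measure (RMSSD), from an already-recovered PPG trace. The frame rate, the refractory period, and the prominence threshold are illustrative assumptions rather than fixed parameters of our method.

    # Minimal sketch (Python): inter-beat intervals and RMSSD from a PPG trace.
    # The 30 fps sampling rate and the peak-detection thresholds are assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    FPS = 30.0  # assumed video frame rate (samples per second)

    def inter_beat_intervals(ppg: np.ndarray) -> np.ndarray:
        """Return the intervals (in seconds) between successive PPG peaks."""
        ppg = (ppg - ppg.mean()) / ppg.std()  # zero-mean, unit-variance
        # Enforce a ~0.4 s refractory period (i.e. at most 150 bpm).
        peaks, _ = find_peaks(ppg, distance=int(0.4 * FPS), prominence=0.5)
        return np.diff(peaks) / FPS

    def rmssd(ibi: np.ndarray) -> float:
        """RMSSD: root mean square of successive inter-beat differences."""
        return float(np.sqrt(np.mean(np.diff(ibi) ** 2)))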

The most investigated approach for recovering a PPG from facial video uses Independent Component Analysis (ICA). ICA isolates statistically independent source signals from a set of observed vectors, each of which is a linear combination of those sources; in this case, the red, green, and blue channels of the video are the observed vectors. Once ICA is applied, one of the resulting independent components is the PPG, whereas the others are noise (a minimal sketch of this pipeline is given at the end of this project description). Our preliminary investigation found this method to be extremely sensitive to movement and to variations in lighting. We aim to concentrate on these challenges because they have not received enough attention from researchers.

Furthermore, we intend to consider other non-contact measurement methods that use the Eulerian Video Magnification (EVM) technique. EVM uses localized spatial pooling (via a Gaussian pyramid) and temporal filtering to pull out the cardiac pulse signal. To date, however, this approach has been effective only in extracting heart-rate estimates. According to our preliminary investigations, EVM is less sensitive to movement noise and could therefore potentially be used for HRV signal extraction. Nonetheless, we still need to devise motion compensation techniques to negate the detrimental effects of movement.
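
To make the ICA step concrete, here is a minimal sketch of the recovery described above. It assumes the per-frame mean red, green, and blue values of a facial ROI have already been extracted into a (3, n_frames) array, and it identifies the PPG component by its spectral peak within a plausible heart-rate band; FastICA is one common ICA implementation, not necessarily the one we will settle on.

    # Minimal sketch (Python): recovering a candidate PPG from mean RGB traces.
    # The input shape, band limits, and use of FastICA are assumptions.
    import numpy as np
    from scipy.signal import detrend, periodogram
    from sklearn.decomposition import FastICA

    FPS = 30.0  # assumed video frame rate

    def recover_ppg(rgb_traces: np.ndarray) -> np.ndarray:
        """rgb_traces: (3, n_frames) mean ROI colour signals -> candidate PPG."""
        x = detrend(rgb_traces, axis=1)
        x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
        sources = FastICA(n_components=3, random_state=0).fit_transform(x.T).T
        # Keep the component with the strongest spectral peak in a plausible
        # heart-rate band (0.75-4 Hz, i.e. 45-240 bpm); the rest is noise.
        best, best_power = sources[0], -np.inf
        for s in sources:
            freqs, power = periodogram(s, fs=FPS)
            band = (freqs >= 0.75) & (freqs <= 4.0)
            if power[band].max() > best_power:
                best, best_power = s, power[band].max()
        return best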

Project 2 - Multimodal Affect Detection using Audio-Visual Signals

Using audio-video capture of a subject sitting in front of a computer (or interacting with a robot), we can extract the following information modalities:
  • Facial expression features
  • Auditory features, such as pitch and speech rate (which tend to be reflective of arousal)
  • Body posture information
  • Physiological features retrieved from PPG and HRV signals (as described in Project 1)
We recognize the difficulty of assessing affective status using a single modality. For instance, most works on affect detection through facial expression use still photographs showing deliberately emphasized facial expressions. This is principally due to the difficulty of recognizing and analysing fleeting moments of expressive facial display and the subtlety of the facial muscular actions triggered by emotions. Therefore, several studies have proposed multimodal solutions for affect detection. We divide these multimodal techniques into two categories:
  1. Methods that use audio-visual and physiological signals captured through a multitude of sensors, where the physiological signals are collected by contact sensors such as Galvanic Skin Response (GSR) or electrocardiogram (ECG) devices
  2. Methods that use audio-visual channels to extract information related mostly to facial expressions and relevant auditory features
The first category captures a richer array of signals, but at the expense of additional, relatively intrusive biosensors. Hence, we propose a multimodal approach in which we simultaneously collect relevant features from multiple information modalities. Unlike previous work on the subject, we want to fuse physiological information captured through non-contact means (as described in Project 1) with the other information modalities listed at the beginning of this section. We hypothesize that this multimodal approach could yield a better understanding of a subject’s emotions and possibly of cognitive processes (e.g. attention, mental workload).
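
As an illustration of the kind of fusion we have in mind, the sketch below concatenates per-window feature vectors from the four modalities listed above and trains a single classifier on the result (feature-level fusion). The feature dimensions, the random data, the label set, and the choice of classifier are all placeholder assumptions.

    # Minimal sketch (Python): feature-level fusion across modalities.
    # All features, labels, and the classifier choice are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fuse(facial, audio, posture, physio):
        """Concatenate per-sample feature vectors from the four modalities."""
        return np.hstack([facial, audio, posture, physio])

    rng = np.random.default_rng(0)
    X = fuse(rng.random((100, 20)),  # facial-expression features
             rng.random((100, 8)),   # pitch, speech rate, ...
             rng.random((100, 6)),   # body-posture features
             rng.random((100, 4)))   # PPG/HRV features (see Project 1)
    y = rng.integers(0, 3, 100)      # e.g. 0=neutral, 1=stressed, 2=engaged

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)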

Project 3 - Affective User Experience Assessment Framework

Current usability studies are performed either by expert inspection (e.g. heuristic evaluation, cognitive walkthrough, pluralistic walkthrough) or with the participation of end users. We advocate the latter approach because it involves the users themselves in substantiating UI quality. UI evaluations involving end users can be subjective in nature, relying on participants to express their frustration with the interface or to voice their opinions about UI design decisions. This subjectivity makes the approach, in some cases, dependent on the personality and level of commitment of the participant; furthermore, the conclusions reached can depend on the interpretation of the evaluator.

Empirical usability studies rely on the measurement of relevant parameters such as task completion time, error rate, and efficiency. Although these parameters are good indicators of the productivity achievable through a particular UI, they lack objective information about the user experience. Emotions elicited through human-machine interaction are likely to shape the user’s experience more than the level of productivity attained.

We propose an empirical user experience evaluation scheme that relies on affective information collected from subjects while they interact with a software application. The affective information is then matched against the operations performed on the program being evaluated, in order to gauge the user’s emotional response to particular stimuli. This information can complement the subjectively collected information, leading to more precise conclusions about the affective user experience.
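
The matching step can be as simple as attributing each affect sample to the most recent preceding operation by timestamp, as the sketch below illustrates; the event names and the arousal scale are hypothetical.

    # Minimal sketch (Python): aligning affect samples with a UI event log.
    # The operations and arousal values below are hypothetical.
    from bisect import bisect_right

    # (timestamp in seconds, operation) as logged by the evaluated application
    events = [(0.0, "open_dialog"), (4.2, "submit_form"), (9.8, "undo")]
    # (timestamp, arousal estimate in [0, 1]) from the affect-recognition pipeline
    affect = [(1.0, 0.2), (4.5, 0.8), (5.0, 0.9), (10.1, 0.4)]

    event_times = [t for t, _ in events]

    def stimulus_for(sample_time):
        """Attribute an affect sample to the most recent preceding operation."""
        i = bisect_right(event_times, sample_time) - 1
        return events[i][1] if i >= 0 else "baseline"

    for t, arousal in affect:
        print(f"{t:5.1f}s  arousal={arousal:.2f}  during '{stimulus_for(t)}'")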

Moreover, we would like to explore the effect of users’ emotional status on their perception of a user interface, and to ascertain whether certain life-critical applications should only be operated when the user is in a certain affective condition. Additionally, we want to study the emotional impact of using non-traditional UIs that involve interaction with robots, haptic devices, or head-mounted displays.

Project 4 - Persuasive Games Aimed at Increasing Physical Activity Level

Our lifestyles are becoming increasingly sedentary, a reality that raises our risk of cardiovascular disease, among a long list of other possible ailments. To address this issue, several persuasive serious games have been developed to promote a healthier lifestyle by increasing physical activity. These games typically monitor the amount of exercise the user engages in and integrate persuasive advice into the gaming scenario. Nonetheless, these applications mostly employ a single persuasive technique targeted at a general audience. This “one-size-fits-all” approach might not achieve the desired degree of persuasion. To be more effective, persuasive applications should provide adaptable and customizable services through context-awareness and personalization.

We propose the dynamic modulation of gaming scenarios to present persuasive content effectively while considering the player’s type (achiever, conqueror, daredevil, mastermind, seeker, socializer, or survivor), affective status, and environmental context (e.g. location, weather). In particular, the user’s affective status can reflect their level of response to a particular persuasive strategy, providing the gaming application with the feedback loop necessary to adjust its persuasion technique, as sketched below.
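
The sketch below illustrates one possible form of that feedback loop: the game keeps its current persuasive strategy while the measured affective response indicates engagement, and otherwise falls back to a strategy matched to the player’s type. The strategy names, the engagement threshold, and the type-to-strategy mapping are all hypothetical.

    # Minimal sketch (Python): affect-driven selection of a persuasive strategy.
    # Strategy names, the 0.5 threshold, and the mapping are hypothetical.
    PLAYER_TYPES = {"achiever", "conqueror", "daredevil", "mastermind",
                    "seeker", "socializer", "survivor"}

    def choose_strategy(player_type, engagement, current):
        """Keep the current strategy while it engages; otherwise switch."""
        assert player_type in PLAYER_TYPES
        if engagement >= 0.5:  # affective feedback says the strategy works
            return current
        fallback = {"achiever": "goal_ladder", "socializer": "peer_challenge",
                    "conqueror": "leaderboard", "seeker": "exploration_reward"}
        return fallback.get(player_type, "gentle_reminder")

    print(choose_strategy("socializer", engagement=0.3, current="leaderboard"))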

The game is to be deployed on mobile platforms, which presents further challenges for affect recognition from audio-visual signals: recordings would be collected in potentially noisy environments that are susceptible to variations in lighting conditions.

Project 5 - Serious Games using Tangible User Interfaces for Children with Special Needs

Over the past few years, serious video games have proven to be a powerful educational tool that is very popular among children and young adults. Individuals with developmental disorders exhibit limitations in adaptive behaviour that require special education; the sooner they receive an intervention, the more likely they are to live independent lives as adults. In this project, we develop video games for children with cognitive and learning difficulties. We use building blocks as a tangible user interface, integrated with a 3D graphical application running on a computer, to create cognitively stimulating and entertaining games. With this project, we intend to improve the conceptual, social, and practical skills of these individuals through serious gaming.
