Our lab welcomes motivated and talented applicants from any race, ethnicity, religion, national origin, eligible age, or disability status. Furthermore, we are devoted to building a collaborative and supportive lab environment.
Read more about our lab philosophy.
We are happy to talk with you about your background and your research goals for a future career in academia, industry, or other ventures. We strive to provide excellent training across a range of computational and experimental techniques in affective intelligence. Our funding is both need-based and merit-based, except for the undergraduate summer/winter internship programs.
Currently, we have open positions for two (2) undergraduate and two (2) graduate (MS/Ph.D.) students (100% tuition support plus an additional student research stipend). The following projects are in collaboration with KAIST. The research descriptions can be found here; see also our publications.
- Building Predictive Models of Emotion with Non-linear Data in the Human Brain
- Developing Affect-driven Closed-Loop AI Systems
These positions require (1) programming experience in Python; (2) a basic understanding of deep learning methods; (3) the ability to present results orally and in writing in either Korean or English; and (4) the ability to work with KAIST in an integrated team environment.
If interested, please email firstname.lastname@example.org.
Our lab regularly hosts undergraduate students every vacation through our summer and winter internship programs (these can be linked with Inha University's winter/summer undergraduate research program, for up to 3 course credits). After midterm week, we usually announce two to four openings for the internship program (a student stipend is provided). During the vacation, interns rotate through up to three of our ongoing research projects and are asked to complete simple computational or experimental tasks. As the period comes to a close, we talk with them about joining the lab. We highly encourage interns to keep talking with the PI and lab members about the kinds of projects they want to shape in our lab. We consider project fit and alignment with our lab values.
Inquiries should be emailed directly to the PI, Dr. Byung Hyung Kim.
Byung Hyung Kim, Ji Ho Kwak, Minuk Kim, Sungho Jo, “Affect-driven Robot Behavior Learning System using EEG Signals for Less Negative Feelings and More Positive Outcomes,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4162-4167, Sep, 2021. [pdf]
Byung Hyung Kim, Sungho Jo, Sunghee Choi, “ALIS: Learning Affective Causality behind Daily Activities from a Wearable Life-Log System,” IEEE Transactions on Cybernetics, 2021, early access, doi:10.1109/TCYB.2021.3106638. IF:11.079, JCR Rank:1⁄22=2.27% in Computer Science, Cybernetics. [pdf]
Yoon-Je Suh, Byung Hyung Kim, “Riemannian Embedding Banks for Common Spatial Patterns with EEG-based SPD Neural Networks,” 35th AAAI Conference on Artificial Intelligence (AAAI), vol.35, no.1, pp.854–862, Feb, 2021. Acceptance Rate=21.4%, Top-tier in Computer Science. Co-first Author. Corresponding Author. [pdf]
Byung Hyung Kim, Yoon-Je Suh, Honggu Lee, Sungho Jo, “Nonlinear Ranking Loss on Riemannian Potato Embedding,” 25th International Conference on Pattern Recognition (ICPR), pp.4348-4355, Jan, 2021. [pdf]
Byung Hyung Kim, Seunghun Koh, Sejoon Huh, Sungho Jo, Sunghee Choi, “Improved Explanatory Efficacy on Human Affect and Workload through Interactive Process in Artificial Intelligence,” IEEE Access, vol.8, pp.189013-189024, 2020. [pdf]
Byung Hyung Kim, Sungho Jo, Sunghee Choi, “A-Situ: a computational framework for affective labeling from psychological behaviors in real-life situations,” Scientific Reports, vol.10, 15916, Sep, 2020. [pdf]
Jin Woo Choi, Byung Hyung Kim, Sejoon Huh, Sungho Jo, “Observing Actions through Immersive Virtual Reality Enhances Motor Imagery Training,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol.28, no.7, pp.1614-1622, 2020. IF:3.340, JCR Rank:7⁄68=9.56% in Rehabilitation. Co-first Author. [pdf]
Byung Hyung Kim, Sungho Jo, “Deep Physiological Affect Network for the Recognition of Human Emotions,” IEEE Transactions on Affective Computing, vol.11, no.2, pp.230-243, 2020. IF:7.512, JCR Rank:11⁄136=7.72% in Computer Science, Artificial Intelligence. [pdf]
Seunghun Koh, Hee Ju Wi, Byung Hyung Kim, Sungho Jo, “Personalizing the Prediction: Interactive and Interpretable Machine Learning,” 16th IEEE International Conference on Ubiquitous Robots (UR), pp.354-359, Jun, 2019.
Byung Hyung Kim, Sungho Jo, “An Empirical Study on Effect of Physiological Asymmetry for Affective Stimuli in Daily Life,” 5th IEEE International Winter Workshop on Brain-Computer Interface, pp.103–105, Jan, 2017.
Byung Hyung Kim, Jinsung Chun, Sungho Jo, “Dynamic Motion Artifact Removal using Inertial Sensors for Mobile BCI,” 7th IEEE International EMBS Conference on Neural Engineering, pp.37–40, Apr, 2015.
Byung Hyung Kim, Sungho Jo, “Real-time Motion Artifact Detection and Removal for Ambulatory BCI,” 3rd IEEE International Winter Workshop on Brain-Computer Interface, pp.70–73, Jan, 2015.
Minho Kim, Byung Hyung Kim, Sungho Jo, “Quantitative Evaluation of a Low-cost Noninvasive Hybrid Interface based on EEG and Eye Movement,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol.23, no.2, pp.159-168, 2015. IF:3.972, JCR Rank:3⁄65=4.61% in Rehabilitation.
Byung Hyung Kim, Minho Kim, Sungho Jo, “Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking,” Computers in Biology and Medicine, vol.51, pp.82-92, 2014. Honorable Mention Paper (Top 10%).
Mingyang Li, Byung Hyung Kim, Anastasios Mourikis, “Real-time Motion Tracking on a Cellphone using Inertial Sensing and a Rolling-Shutter Camera,” IEEE International Conference on Robotics and Automation (ICRA), pp.4712-4719, May, 2013.
Byung Hyung Kim, Hak Chul Shin, Phill Kyu Rhee, “Hierarchical Spatiotemporal Modeling for Dynamic Video Trajectory Analysis,” Optical Engineering, vol.50, no.107206, Oct, 2011.
Byung Hyung Kim, Danna Gurari, Hough O’Donnell, Margrit Betke, “Interactive Art System for Multiple Users Based on Tracking Hand Movements,” IADIS International Conference Interfaces and Human Computer Interaction (IHCI), Jul, 2011.
Our research broadly addresses the complex interplay of action, cognition, and emotion in the human brain, and the factors, such as attention, that influence them. The overarching goal is to address the critical challenges of building interactive, intelligent AI systems that discover latent relationships among these connected components. Our studies combine behavioral measures with eye-tracking, computational modeling, virtual reality, measures of brain activity, and neuropsychological methods. Our diverse interests and approaches lead naturally to collaborations with other research groups.
Our work has been published in top-tier AI conferences and journals such as AAAI, IEEE Transactions on Affective Computing, and IEEE Transactions on Cybernetics.
Specific themes of our interest include:
Related keywords include Affective Computing, Brain-Computer Interface (BCI), Deep Learning, Geometric and Manifold Learning, Human-Machine (Robot) Interaction, and Machine Learning.
Building Predictive Models of Emotion with Non-linear Data in the Human Brain
The ability to predict emotional changes is a fundamental measure of affective intelligence, since it enables AI systems to characterize the neuropsychological activities underlying states of feeling. Our group aims to present promising and reliable solutions for learning from non-linear brain data, overcoming challenges induced by its non-stationary nature. We seek transdisciplinary approaches beyond purely data-driven ones: motivations, ideas, and theoretical frameworks from psychiatry, behavioral science, and geometry underlie much of our predictive modeling. Our scope includes, but is not limited to, recognizing human affect, analyzing the spatial-temporal hemispheric structures of different neuropsychological activities, and classifying physiological data such as electroencephalogram (EEG), photoplethysmogram (PPG), electromyography (EMG), and facial expression images.
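To give a flavor of the geometric side of this theme (in the spirit of our Riemannian SPD-network work, though not our actual implementation), the sketch below maps EEG trials to symmetric positive definite (SPD) covariance matrices and averages them under the log-Euclidean metric. The function names, shapes, and regularization are illustrative assumptions.

```python
import numpy as np

def covariance_spd(eeg, eps=1e-6):
    """Sample covariance of one EEG trial (channels x samples),
    regularized so the result is symmetric positive definite."""
    eeg = eeg - eeg.mean(axis=1, keepdims=True)
    cov = eeg @ eeg.T / (eeg.shape[1] - 1)
    return cov + eps * np.eye(cov.shape[0])

def log_euclidean_mean(covs):
    """Mean of SPD matrices under the log-Euclidean metric:
    average the matrix logarithms, then exponentiate back."""
    logs = []
    for c in covs:
        w, v = np.linalg.eigh(c)          # eigendecomposition of SPD matrix
        logs.append(v @ np.diag(np.log(w)) @ v.T)
    m = np.mean(logs, axis=0)
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.exp(w)) @ v.T   # back to the SPD manifold
```

Treating covariances as points on the SPD manifold, rather than as flat vectors, is what lets downstream classifiers respect the non-linear structure of the data; the log-Euclidean metric is one convenient choice among several.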
Developing Affect-driven Closed-Loop AI Systems
Physiological responses are widely used as human feedback for developing closed-loop systems, increasing the capacity of human-AI communication and enabling practical collaboration. Evoked responses have been widely used as a feedback mechanism to confirm the correctness of a system's responses and to provide physiological feedback for evaluating its tasks, but this approach requires the end-user to stay attentive throughout the interaction with the AI system. In addition, the amount of attention needed for decision-making increases with task difficulty, so the quality of human feedback degrades over time due to fatigue.
To overcome this limitation, our group investigates the affective processes of a symbiotic human-machine relationship. By hypothesis, a successful closed-loop system should enable users to develop appropriate trust in the AI system, through which they can increase their understanding of machine behavior and reduce negative feelings toward it. In turn, the AI system reflects the affective feedback by changing how it decides its next action so as to produce positive outcomes. Hence, our study aims to develop a closed-loop system that learns emotional reactions to machine behaviors and uses that affective feedback to optimize its parameters for smooth actions. Further, we consider how this emotional feedback can impact the user's affective processes in the brain associated with machine behaviors.
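The loop described above can be caricatured in a few lines. This is a toy sketch, not our system: `affect_score` stands in for a real affect decoder (which would map EEG or other physiological signals to a valence estimate), and the single `speed` parameter and update rule are invented for illustration.

```python
import random

def affect_score(observation):
    """Hypothetical stand-in for an affect decoder: returns an
    estimated valence in [-1, 1]. Here we fake a user who prefers
    slower machine actions, plus measurement noise."""
    speed = observation["speed"]
    return max(-1.0, min(1.0, 0.5 - speed + random.uniform(-0.1, 0.1)))

def closed_loop(steps=50, lr=0.1):
    """Minimal closed-loop sketch: the system nudges its action
    parameter in the direction that raises the estimated valence."""
    speed = 1.0
    for _ in range(steps):
        score = affect_score({"speed": speed})
        speed = max(0.0, speed + lr * score)  # affect-driven update
    return speed
```

Under these toy assumptions the behavior parameter settles near the user's preference; the point is only the shape of the loop: act, decode affect, adapt, repeat.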
Learning Affective Causality behind Daily Activities
Human emotions and behaviors are reciprocal components that shape each other in everyday life. While past research on each element has made use of various physiological sensors, their interactive relationship in the context of daily life has not yet been explored. Our research aims to build interactive AI systems powered by large-scale data from users. With an unprecedented scale of users interacting with wearable technology, the system analyzes how the contexts of a user's life affect their emotional changes and builds causal structures between emotions and observable behaviors in daily situations. Furthermore, we demonstrate that the proposed system enables us to build causal structures that find individual sources of mental relief suited to negative situations in real life.
Controlling Machine Systems by Human Mind in Natural Environments
Brain-computer interface (BCI) technologies have translated neural information into commands capable of controlling machine systems such as robot arms and drones. Can our minds connect with such AI systems easily in daily life by wearing low-cost devices? To answer this question, our research develops hybrid interfaces that combine EEG-based classification with eye tracking and investigates their feasibility through a Fitts' law-based quantitative evaluation method.
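For readers unfamiliar with the evaluation method, Fitts' law quantifies pointing difficulty from target distance and width, which lets us compare interface throughput in bits per second. The helper names below are illustrative, but the formulas are the standard Shannon formulation.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty (Shannon formulation), in bits:
    ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty divided by the
    observed movement time in seconds."""
    return index_of_difficulty(distance, width) / movement_time
```

For example, a target 7 units away with width 1 has ID = log2(8) = 3.0 bits; reaching it in 1.5 s gives a throughput of 2.0 bits/s. Averaging throughput over many distance/width conditions gives a single figure of merit for a hybrid EEG/eye-tracking interface.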
Increasing Explainability in AI Systems and Its Effects on Mental Models and Reasoning
AI systems have achieved high predictive performance with explanatory features to support their decisions, increasing algorithmic transparency and accountability in real-world environments. However, high predictive accuracy alone is insufficient; ultimately, AI must solve the human-agent interaction problem. By hypothesis, explanations that are succinct and easily interpretable should enable users to develop a highly efficient mental model. In turn, that mental model should enable them to develop appropriate trust in the AI and to perform well when using it. The main goal of this research is to build human-interpretable machine learning systems and to evaluate their explanatory efficacy along with its effects on users' mental models.
Welcome to the Affective Intelligence Lab. (affctiv.ai)
Dr. Byung Hyung Kim leads the Affective Intelligence Lab. He is currently an Assistant Professor in the Department of Artificial Intelligence at Inha University. Previously, he received his Ph.D. in Computer Science from KAIST under the supervision of Prof. Sungho Jo. He completed his master's degree in Computer Science at Boston University, working with Prof. Margrit Betke and Prof. Stan Sclaroff.
His research interests include algorithmic transparency, interpretability in affective intelligence, computational emotional dynamics, cerebral asymmetry and the effects of emotion on brain structure for affective computing, brain-computer interface, and assistive and rehabilitative technology.
He occasionally reviews for the following journals.
- IEEE Trans. on Affective Computing, IEEE Trans. on Cybernetics, IEEE Trans. on Computational Social Systems, IEEE Trans. on Multimedia, Computers in Biology and Medicine, Artificial Intelligence Review
His CV is available here.
- Dept. Industrial Engineering
- 13eye42 at naver.com
“The bird fights its way out of the egg. The egg is the world. Who would be born must first destroy a world.” – Hermann Hesse, Demian
The Affective Intelligence Lab. (affctiv.ai) strives to fight its way out of the egg in science. We know the egg is not just science but also the world. We first destroy our egg in science by engaging in creative projects, producing uncompromisingly high-quality research, and striving to work on unique problems with rigorous methods.
We believe noble science is made possible through our lab values. Further, the experience of breaking the science egg that underlies these values will strengthen us to surmount any limits of the world beyond the Affective Intelligence Lab.
Our lab values spell out our motto, “POKEMON”. We use this motto to frame what we value.
- Pride. We are proud of what we achieve and how we produce it.
- Objectivity. Scientific rigor, clarity, and reproducibility. No shortcuts.
- Knowledge. We develop our knowledge and skills beyond the confines of our current expertise.
- Equality. Everyone is of equal value.
- Mentorship. The mutual benefits of the mentor-mentee relationship are a primary responsibility, which we give and take graciously.
- Openness. We are open to all people. Flexibility to new ideas and changes in focus.
- Network. Personal network between us, collaborators, and the public.
We acknowledge that these values are ideals. We are imperfect but growing toward embodying them. We strive to live up to being an ideal Pokemon.
If you agree with our philosophy and are interested in what we’ve achieved, please read more about our open positions. Our lab welcomes applicants from any race, ethnicity, religion, national origin, eligible age, or disability status. Furthermore, we are devoted to building a collaborative and supportive lab environment.