Join Us

Our lab welcomes motivated and talented applicants regardless of race, ethnicity, religion, national origin, age, or disability status. Furthermore, we are devoted to building a collaborative and supportive lab environment.

Read more about our lab philosophy.

We are happy to talk with you about your background and your research goals for a future career in academia, industry, or other ventures. We strive to provide excellent training across a variety of computational and experimental techniques in affective intelligence. Our funding is need-based plus merit-based, except for the undergraduate summer/winter internship programs.


Open Positions

Currently, we have openings for two (2) undergraduate and two (2) graduate (M.S./Ph.D. or Integrated) students (Full Tuition Waiver + Monthly Stipend; 100% tuition support plus an additional student research stipend). The following projects are in collaboration with KAIST. The research descriptions can be found here; see also our publications.

  • Building Predictive Models of Emotion with Non-linear Data in the Human Brain
  • Developing Affect-driven Closed-Loop AI Systems

These positions require (1) programming experience in Python; (2) a basic understanding of deep learning methods; (3) the ability to present results orally and in writing in either Korean or English; and (4) the ability to work with KAIST in an integrated team environment.

If interested, please email affctiv.ai@gmail.com.


Undergraduate Students

Our lab regularly hosts undergraduate students during each vacation period (summer and winter internship programs; these may be linked with Inha University's summer/winter undergraduate research program). After midterm week, we usually announce two to four openings for the internship program. During the vacation, interns rotate through up to three of our ongoing research projects and are asked to complete simple computational or experimental tasks. As the period comes to a close, we talk with them about joining the lab. We highly encourage interns to keep talking with the PI and lab members about the kinds of projects they want to shape in our lab. We consider project fit and alignment with our lab values.

Postdoctoral Fellows

Inquiries should be emailed directly to the PI, Dr. Byung Hyung Kim.

Publications

  • 우채은, 이수민, 박수민, 최세린, 류제경, 김병형, “비디오 스윈 트랜스포머 기반의 향상된 Visual Saliency 예측” Journal of Korea Multimedia Society (멀티미디어학회논문지), vol.27, no.11, Nov, 2024.

  • 우채은, 최효선, 김병형, “다중 출력 예측을 적용한 EEG 기반 Valence-Arousal 회귀 모델” Journal of Biomedical Engineering Research (의공학회지), vol.45, no.5, Oct, 2024. [pdf]

  • 방윤석, 김병형, “EEG 기반 SPD-Net에서 리만 프로크루스테스 분석에 대한 연구,” Journal of Biomedical Engineering Research (의공학회지), vol.45, no.4, pp.179-186, Aug, 2024. [pdf] [link]

  • HyoSeon Choi, Dahoon Choi, Netiwit Kaongoen, Byung Hyung Kim, “Detecting Concept Shifts under Different Levels of Self-awareness on Emotion Labeling,” 27th International Conference on Pattern Recognition (ICPR), 2024. Accepted. [code]

  • Hyunwook Kang, Jin Woo Choi, Byung Hyung Kim, “Cascading global and sequential temporal representations with local context modeling for EEG-based emotion recognition,” 27th International Conference on Pattern Recognition (ICPR), 2024. Accepted.

  • Seunghun Koh, Byung Hyung Kim+, Sungho Jo+, “Understanding the User Perception and Experience of Interactive Algorithmic Recourse Customization,” ACM Transactions on Computer-Human Interaction (TOCHI), vol.31, no.3, 2024, +Co-Corresponding Author. [pdf] [link]

  • Kobiljon Toshnazarov, Uichin Lee, Byung Hyung Kim, Varun Mishra, Lismer Andres Caceres Najarro, Youngtae Noh, “SOSW: Stress Sensing with Off-the-Shelf Smartwatches in the Wild,” IEEE Internet of Things Journal (IoT-J), vol.11, no.12, pp.21527-21545, 2024. 2023 JCR IF:10.6, Rank:4/158=2.2% in Computer Science, Information Systems. [pdf] [link]

  • 김승한+, 진태균+, 박혜진, 정희재, 김병형, “가상현실에서 짧은 신호 길이를 활용한 시간 영역 SSVEP-BCI 시스템 속도 향상,” 한국컴퓨터종합학술대회 (KCC), pp.1185-1187, Jun, 2024. +Co-first Authors.

  • 육지훈, 김병형, “복소수 신경회로망 기반의 PPG 신호 복원 모델,” 한국컴퓨터종합학술대회 (KCC), pp.732-734, Jun, 2024.

  • 최효선, 최다훈, 김병형, “EEG 기반 감정 분류에서 MSP를 사용한 OOD 검출 적용”, KIISE Journal (정보과학논문지), vol. 51, no. 5, pp.438-444, 2024. *Invited Paper(KCC 우수논문 초청)*. [pdf]

  • 강현욱, 김병형, “ConTL: CNN, Transformer 및 LSTM의결합을 통한 EEG 기반 감정인식 성능 개선”, KIISE Journal (정보과학논문지), vol. 51, no. 5, pp.454-463, 2024. [pdf]

  • HyoSeon Choi, ChaeEun Woo, JiYun Kong, Byung Hyung Kim, “Multi-Output Regression for Integrated Prediction of Valence and Arousal in EEG-Based Emotion Recognition,” 12th IEEE International Winter Conference on Brain-Computer Interface, Feb, 2024. [code] [pdf] [link]

  • Yunjo Han, Kobiljon E Toshnazarov, Byung Hyung Kim, Youngtae Noh, Uichin Lee, “WatchPPG: An Open-Source Toolkit for PPG-based Stress Detection using Off-the-shelf Smartwatches,” Adjunct Proceedings of ACM International Joint Conference on Pervasive and Ubiquitous Computing (Ubicomp) & ACM International Symposium on Wearable Computing (ISWC), pp.208-209, Oct, 2023. [pdf] [link]

  • Netiwit Kaongoen, Jaehoon Choi, Jin Woo Choi, Haram Kwon, Chaeeun Hwang, Guebin Hwang, Byung Hyung Kim, Sungho Jo, “The future of wearable EEG: A review of ear-EEG technology and its applications,” Journal of Neural Engineering, vol.20, no.5, 2023. [pdf] [link]

  • Jaehoon Choi, Netiwit Kaongoen, HyoSeon Choi, Minuk Kim, Byung Hyung Kim+, Sungho Jo+, “Decoding Auditory-Evoked Response in Affective States using Wearable Around-Ear EEG System,” Biomedical Physics and Engineering Express, vol.9, no.5, pp.055029, 2023. +Co-Corresponding Author. [pdf] [link]

  • Byung Hyung Kim, Jin Woo Choi, Honggu Lee, Sungho Jo, “A Discriminative SPD Feature Learning Approach on Riemannian Manifolds for EEG Classification,” Pattern Recognition, vol. 143, no. 109751, 2023. 2022 JCR IF:8, Rank:30/275=10.7% in Engineering, Electrical & Electronic. [pdf] [link]

  • 최다훈, 최효선, 육지훈, 김병형, “EEG 기반 감정 분류에서 OOD 검출 적용,” 한국컴퓨터종합학술대회 (KCC), pp.706–708, Jun, 2023. Oral Presentation (Acceptance < 28%). *우수논문상 (Top < 8% = 53/739)*.

  • 육지훈, 주기현, 박영진, 김병형, “접촉 압력에 무관한 PPG 신호 추출 모델,” 한국컴퓨터종합학술대회 (KCC), pp.1151-1153, Jun, 2023.

  • 김태훈, 백범성, 김성언, 이은정, 안태현, 김병형, “EEG 분류를 위한 와서스테인 거리 손실을 사용한 심층 표현 기반의 도메인 적응 기법,” 한국컴퓨터종합학술대회 (KCC), pp.1042–1044, Jun, 2023. Oral Presentation (Acceptance < 28%).

  • Jin Woo Choi, Haram Kwon, Jaehoon Choi, Netiwit Kaongoen, Chaeeun Hwang, Minuk Kim, Byung Hyung Kim, Sungho Jo, “Neural Applications Using Immersive Virtual Reality: A Review on EEG Studies,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol.31, pp.1645–1658, 2023. 2022 JCR IF:4.9, Rank:4/68=5.1% in Rehabilitation. [pdf] [link]

  • Byung Hyung Kim, Sungho Jo, Sunghee Choi, “ALIS: Learning Affective Causality behind Daily Activities from a Wearable Life-Log System,” IEEE Transactions on Cybernetics, vol.52, no.12, pp.13212–13224, 2022. IF:19.118, JCR Rank:3/146=1.72% in Computer Science, Artificial Intelligence. [pdf] [link]

  • Byung Hyung Kim, Ji Ho Kwak, Minuk Kim, Sungho Jo, “Affect-driven Robot Behavior Learning System using EEG Signals for Less Negative Feelings and More Positive Outcomes,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4162-4167, Sep, 2021. [pdf] [link]

  • Yoon-Je Suh, Byung Hyung Kim, “Riemannian Embedding Banks for Common Spatial Patterns with EEG-based SPD Neural Networks,” 35th AAAI Conference on Artificial Intelligence (AAAI), vol.35, no.1, pp.854–862, Feb, 2021. Acceptance Rate=21.4%, Top-tier in Computer Science. Co-first Author. Corresponding Author. [pdf] [link]

  • Byung Hyung Kim, Yoon-Je Suh, Honggu Lee, Sungho Jo, “Nonlinear Ranking Loss on Riemannian Potato Embedding,” 25th International Conference on Pattern Recognition (ICPR), pp.4348-4355, Jan, 2021. [pdf] [link]

  • Byung Hyung Kim, Seunghun Koh, Sejoon Huh, Sungho Jo, Sunghee Choi, “Improved Explanatory Efficacy on Human Affect and Workload through Interactive Process in Artificial Intelligence,” IEEE Access, vol.8, pp.189013-189024, 2020. [pdf] [link]

  • Byung Hyung Kim, Sungho Jo, Sunghee Choi, “A-Situ: a computational framework for affective labeling from psychological behaviors in real-life situations,” Scientific Reports, vol.10, 15916, Sep, 2020. [pdf]

  • Jin Woo Choi, Byung Hyung Kim, Sejoon Huh, Sungho Jo, “Observing Actions through Immersive Virtual Reality Enhances Motor Imagery Training,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol.28, no.7, pp.1614-1622, 2020. 2019 JCR IF:3.340, Rank:7/68=9.56% in Rehabilitation. Co-first Author. [pdf]

  • Byung Hyung Kim, Sungho Jo, “Deep Physiological Affect Network for the Recognition of Human Emotions,” IEEE Transactions on Affective Computing, vol.11, no.2, pp.230-243, 2020. 2019 JCR IF:7.512, Rank:11/136=7.72% in Computer Science, Artificial Intelligence. [pdf]

  • Seunghun Koh, Hee Ju Wi, Byung Hyung Kim, Sungho Jo, “Personalizing the Prediction: Interactive and Interpretable Machine Learning,” 16th IEEE International Conference on Ubiquitous Robots (UR), pp.354-359, Jun, 2019.

  • Byung Hyung Kim, Sungho Jo, “An Empirical Study on Effect of Physiological Asymmetry for Affective Stimuli in Daily Life,” 5th IEEE International Winter Workshop on Brain-Computer Interface, pp.103–105, Jan, 2017.

  • Byung Hyung Kim, Jinsung Chun, Sungho Jo, “Dynamic Motion Artifact Removal using Inertial Sensors for Mobile BCI,” 7th IEEE International EMBS Conference on Neural Engineering, pp.37–40, Apr, 2015.

  • Byung Hyung Kim, Sungho Jo, “Real-time Motion Artifact Detection and Removal for Ambulatory BCI,” 3rd IEEE International Winter Workshop on Brain-Computer Interface, pp.70–73, Jan, 2015.

  • Minho Kim, Byung Hyung Kim, Sungho Jo, “Quantitative Evaluation of a Low-cost Noninvasive Hybrid Interface based on EEG and Eye Movement,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol.23, no.2, pp.159-168, 2015. 2014 JCR IF:3.972, Rank:3/65=4.61% in Rehabilitation.

  • Byung Hyung Kim, Minho Kim, Sungho Jo, “Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking,” Computers in Biology and Medicine, vol.51, pp.82-92, 2014. Honorable Mention Paper(Top 10%).

  • Mingyang Li, Byung Hyung Kim, Anastasios Mourikis, “Real-time Motion Tracking on a Cellphone using Inertial Sensing and a Rolling-Shutter Camera,” IEEE International Conference on Robotics and Automation (ICRA), pp.4712-4719, May, 2013.

  • Byung Hyung Kim, Hak Chul Shin, Phill Kyu Rhee, “Hierarchical Spatiotemporal Modeling for Dynamic Video Trajectory Analysis,” Optical Engineering, vol.50, no.107206, Oct, 2011.

  • Byung Hyung Kim, Danna Gurari, Hough O’Donnell, Margrit Betke, “Interactive Art System for Multiple Users Based on Tracking Hand Movements,” IADIS International Conference Interfaces and Human Computer Interaction (IHCI), Jul, 2011.

Research

Our research broadly addresses the complex interplay of action, cognition, and emotion in the human brain, and the factors that influence them. The overarching goal is to address critical challenges in building interactive and intelligent artificial intelligence (AI) systems that discover latent relationships between these connected components. Our studies combine behavioral measures with eye-tracking, computational modeling, virtual reality, measures of brain activity, and neuropsychological methods. Our diverse interests and approaches lead to natural collaborations.

Our work has been published in top-tier AI conferences and journals such as AAAI, IEEE Trans. on Affective Computing, and IEEE Trans. on Cybernetics.

Specific themes of our interest include:

Related keywords include Affective Computing, Brain-Computer Interface (BCI), Deep Learning, Geometric Deep Learning, Human-Machine (Robot) Interaction, Machine Learning, and Manifold Learning.

Building predictive models of emotion with non-linear data in the human brain

The ability to predict emotional changes is a fundamental measure of affective intelligence, since it enables AI systems to characterize neuropsychological activities for recognizing states of feeling. Our group aims to present promising and reliable solutions for learning from non-linear data in the human brain, overcoming challenges induced by its non-stationary nature. We seek transdisciplinary approaches beyond purely data-driven ones: motivations, ideas, and theoretical frameworks from psychiatry, behavioral science, and geometry underlie much of our predictive modeling. Our scope includes, but is not limited to, recognizing human affect, analyzing the spatio-temporal hemispheric structures of different neuropsychological activities, and classifying physiological data such as electroencephalogram (EEG), photoplethysmogram (PPG), electromyography (EMG), and facial expression images.
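As a toy illustration of the geometric viewpoint (a minimal sketch under simplified assumptions, not our actual pipeline), EEG trials can be summarized by their spatial covariance matrices, which are symmetric positive-definite (SPD) and therefore live on a curved manifold rather than a vector space. Two such matrices can be compared with the affine-invariant Riemannian metric:

```python
# Sketch: compare EEG trials via SPD covariance matrices on the Riemannian
# manifold (illustrative only; trial shapes and regularization are assumptions).
import numpy as np
from scipy.linalg import eigvalsh

def spd_covariance(trial, eps=1e-6):
    """Spatial covariance of one EEG trial (channels x samples), regularized to stay SPD."""
    c = np.cov(trial)
    return c + eps * np.eye(c.shape[0])

def airm_distance(a, b):
    """Affine-invariant Riemannian distance: sqrt(sum of squared log generalized eigenvalues)."""
    w = eigvalsh(b, a)  # generalized eigenvalues of (b, a), i.e. eigenvalues of a^-1 b
    return np.sqrt(np.sum(np.log(w) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 256))        # 8 channels, 256 samples
y = rng.standard_normal((8, 256)) * 2.0  # a differently scaled trial
cx, cy = spd_covariance(x), spd_covariance(y)
```

Unlike the Euclidean distance between flattened matrices, this metric is invariant to invertible linear transformations of the channels, which is one motivation for SPD networks over raw feature vectors.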

Developing Affect-driven Closed-Loop AI Systems

Physiological responses are widely used as human feedback for developing closed-loop systems, increasing the capacity of human-AI communication and enabling practical collaboration. While evoked responses have been widely used as a feedback mechanism, confirming the correctness of machine responses and providing physiological signals for evaluating tasks, this approach requires the end-user to remain attentive while interacting with the AI system. Moreover, the amount of attention needed for decision-making increases with task difficulty, so human feedback quality decreases over time due to fatigue. To overcome this limitation, our group investigates the affective process of a symbiotic relationship. We hypothesize that a successful closed-loop system should enable users to develop appropriate trust in the AI system, through which they can increase their understanding and reduce negative feelings toward their perception of machine behavior. In turn, the AI system reflects affective feedback by changing how it decides its next action so as to produce positive outcomes. Hence, our study aims to develop a closed-loop system that learns emotional reactions to machine behaviors and uses affective feedback to optimize its parameters for smooth actions. Further, we consider how user feedback on emotion can impact the user's affective processes in the brain associated with machine behaviors.
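The loop described above can be sketched in a few lines. This is a hypothetical illustration, not our actual system: `decode_affect` stands in for a real affect decoder driven by physiological signals, and the finite-difference gradient step stands in for whatever policy update a deployed system would use.

```python
# Hypothetical affect-driven closed loop: the agent adjusts a behavior
# parameter to reduce a decoded negative-affect score (lower is better).
def closed_loop(decode_affect, act, theta=1.0, lr=0.2, steps=20):
    """decode_affect(theta) -> negative-affect score; act(theta) executes the behavior."""
    for _ in range(steps):
        act(theta)                     # machine performs its current behavior
        score = decode_affect(theta)   # user's affective reaction is decoded
        # finite-difference estimate of how affect changes with the parameter
        grad = (decode_affect(theta + 1e-3) - score) / 1e-3
        theta -= lr * grad             # move toward behaviors evoking less negative affect
    return theta
```

For example, with a simulated user whose negative affect is minimized at `theta = 0.3` (say `decode_affect = lambda t: (t - 0.3) ** 2`), the loop converges near 0.3.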

Learning Affective Causality behind Daily Activities

Human emotions and behaviors are reciprocal components that shape each other in everyday life. While past research on each element has made use of various physiological sensors in many ways, their interactive relationship in the context of daily life has not yet been fully explored. Our research aims to build interactive AI systems powered by large-scale data from users. With an unprecedented scale of users interacting with wearable technology, our systems analyze how the contexts of a user's life affect his or her emotional changes and build causal structures between emotions and observable behaviors in daily situations. Furthermore, we demonstrate that such causal structures enable us to find individual sources of mental relief suited to negative situations in real life.

Controlling Machine Systems by Human Mind in Natural Environments

Brain-computer interface (BCI) technologies have translated neural information into commands capable of controlling machine systems such as robot arms and drones. Can our minds connect with such AI systems easily in daily life through low-cost wearable devices? To answer this question, our research develops hybrid interfaces combining EEG-based classification with eye tracking and investigates their feasibility through a Fitts's-law-based quantitative evaluation method.
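A Fitts's-law-based evaluation boils down to computing an index of difficulty for each target and regressing movement time on it. The sketch below is illustrative only, with made-up trial data; it shows the Shannon formulation of the index of difficulty and a simple throughput estimate:

```python
# Sketch of a Fitts's-law evaluation: fit MT = a + b * ID to measured
# movement times and report throughput in bits per second.
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def fit_fitts(ids, times):
    """Ordinary least-squares fit of MT = a + b * ID; returns (a, b)."""
    n = len(ids)
    mean_id = sum(ids) / n
    mean_t = sum(times) / n
    b = sum((i - mean_id) * (t - mean_t) for i, t in zip(ids, times)) \
        / sum((i - mean_id) ** 2 for i in ids)
    a = mean_t - b * mean_id
    return a, b

# Toy trials: (target distance, target width, measured movement time in seconds)
trials = [(100, 20, 0.9), (200, 20, 1.2), (400, 20, 1.5), (400, 40, 1.3)]
ids = [index_of_difficulty(d, w) for d, w, _ in trials]
times = [t for _, _, t in trials]
a, b = fit_fitts(ids, times)
throughput = sum(i / t for i, t in zip(ids, times)) / len(ids)  # mean ID/MT, bits/s
```

A harder target (larger D or smaller W) has a higher ID; a well-behaved interface yields a positive slope `b`, and throughput allows comparison across interfaces such as EEG-only versus hybrid EEG plus eye tracking.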

Increasing Explainability in AI Systems and Its Effects on Mental Models and Reasoning

AI systems have achieved high predictive performance and provide explanatory features to support their decisions, increasing algorithmic transparency and accountability in real-world environments. However, high predictive accuracy alone is insufficient; ultimately, AI should solve the human-agent interaction problem. We hypothesize that explanations that are succinct and easily interpretable should enable users to develop a highly efficient mental model. In turn, that mental model should enable them to develop appropriate trust in the AI and perform well when using it. The main goal of this research is to build human-interpretable machine learning systems and to evaluate their explanatory efficacy along with its effects on users' mental models.

Members

Welcome to the Affective Artificial Intelligence Lab (affctiv.ai).

Dr. Byung Hyung Kim leads the Affective Artificial Intelligence Lab. He is currently an Assistant Professor in the Department of Artificial Intelligence at Inha University. Previously, he received his Ph.D. in Computer Science from KAIST under the supervision of Prof. Sungho Jo. He completed his master's degree in Computer Science at Boston University, working with Prof. Margrit Betke and Prof. Stan Sclaroff.

His research interests include algorithmic transparency, interpretability in affective intelligence, computational emotional dynamics, cerebral asymmetry and the effects of emotion on brain structure for affective computing, brain-computer interface, and assistive and rehabilitative technology.

He occasionally reviews for the following journals:

  • IEEE Trans. on Affective Computing, IEEE Trans. on Cybernetics, IEEE Trans. on Neural Networks and Learning Systems, IEEE Trans. on Multimedia, Computers in Biology and Medicine, Artificial Intelligence Review

His CV is available here.


Kihyeon Joo, Lab Representative

  • Master’s Program, 2024.03 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Information and Communication Engineering, Inha University
  • kihyeonjoo at inha.edu

Jiyoung Yu, Lab Administrator

  • Administrative Staff, 2024.03 - Present.
  • Budget, Research Agreement
  • #908, HighTech Bldg., Inha University
  • yuji0 at inha.ac.kr

Hanyu Li

  • Ph.D. Program, 2022.09 - Present.
  • Artificial Intelligence, Inha University
  • M.S. Computer Science and Technology, CQUPT, China
  • lihanyu at inha.edu

In-Kyung Lee

  • Ph.D. Program, 2024.03 - Present.
  • Artificial Intelligence, Inha University
  • M.S. Industrial Engineering, Inha University
  • 9ruddls3 at inha.edu

Isaac Yoon Seock Bang

  • M.S./Ph.D. Integrated Program, 2022.09 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Industrial Engineering, Inha University
  • isaacrulz93 at inha.edu

ChaeEun Woo

  • M.S./Ph.D. Integrated Program, 2023.03 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Electrical Engineering, Gangneung-Wonju National University
  • codms1440 at inha.edu

Sanghyun Lee

  • Master’s Program, 2022.03 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Biomedical Engineering, Korea University
  • dsbjegi at inha.edu

Hyunwook Kang

  • Master’s Program, 2022.09 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Information Technology, University of Newcastle, Australia
  • hyunwook.kang at inha.edu

Sejin Park

  • Master’s Program, 2024.03 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Computer Science and Engineering, Sunmoon University
  • sejinpark at inha.edu

Sumin Lee

  • Master’s Program, 2024.03 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Computer Science and Engineering, Kyonggi University
  • leejae7124 at inha.edu

Soomin Park

  • Master’s Program, 2024.09 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Information and Communication Engineering, Inha University
  • minsoominsoo at inha.edu

TaekGyun Kim

  • Master’s Program, 2024.09 - Present.
  • Artificial Intelligence, Inha University
  • B.S. Computer Engineering, Inha University
  • taekgyun.kim at inha.edu

Undergraduate Interns. 김승한, 육지훈, 윤준혁, 이상현, 임희수, 전장혁, 정희재, 진태균, 최세린


Alumni. Dahoon Choi (B.S., 2022.03–2023.12) @ Kakao, HyoSeon Choi (M.S., 2024.08)

Philosophy

“The bird fights its way out of the egg. The egg is the world. Who would be born must first destroy a world.” – Hermann Hesse, Demian

The Affective Artificial Intelligence Lab (affctiv.ai) strives to fight its way out of the egg in science. We know the egg is not just science but also the world. We first destroy our egg in science by engaging in creative projects, producing high-quality research uncompromisingly, and striving to work on unique problems with rigorous methods.

We believe noble science is made possible through our lab values. Further, the experience of breaking the egg of science that underlies these values will strengthen us to surmount any limits of the world beyond the Affective Artificial Intelligence Lab.

Our motto, embodying the lab values, is “Pokemon”. We use it to frame what we value:

  • Pride. We are proud of what we achieve and how we produce it.
  • Objectivity. Scientific rigor, clarity, and reproducibility. No shortcuts.
  • Knowledge. We develop our knowledge and skills beyond their current confines.
  • Equality. Everyone is of equal value.
  • Mentorship. Mutual benefit from the mentor-mentee relationship is a primary responsibility we give and take graciously.
  • Openness. We are open to all people, and flexible to new ideas and changes in focus.
  • Network. We build personal networks among ourselves, our collaborators, and the public.

We acknowledge that these values are ideals. We are imperfect, but we are growing toward embodying them. We strive to live up to being an ideal Pokemon.

If you agree with our philosophy and are interested in what we have achieved, please read more about our open positions. Our lab welcomes applicants regardless of race, ethnicity, religion, national origin, age, or disability status. Furthermore, we are devoted to building a collaborative and supportive lab environment.