Kamyab Yazdipaz

I'm a PhD student in Mechanical Engineering (Robotics) at Washington State University, conducting research in the SIAS Lab, under the supervision of Dr. Mehdi Hosseinzadeh.

My research lies at the intersection of robot perception, computer vision, and human–robot interaction. My work focuses on enabling intelligent robotic systems to reason about human behavior, perceptual awareness, and intent through multimodal sensing and learning-based methods. I am particularly interested in integrating vision–language models, probabilistic reasoning, and motion planning to support robust mobile robot navigation and decision-making in complex, cluttered environments, enabling safe, adaptive, and socially aware robot behavior in human-centered settings.

---

Education

Washington State University (WSU)
PhD in Mechanical Engineering
2025 – Present
Iran University of Science and Technology (IUST)
Master of Science in Mechanical Engineering
2020 – 2023
Amirkabir University of Technology (AUT)
Bachelor of Science in Mechanical Engineering
2015 – 2020
---

Research

Safe Robot Action Planning under Human Behavioral Uncertainty with Probabilistic and Vision–Language Reasoning


K. Yazdipaz, M. Amiri, M. Hosseinzadeh
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Submitted), 2026
[video]

The proposed action planner updates the robot's belief about human cooperation by fusing two latent attributes: Perceptual Awareness (inferred from head pose and gestures) and Navigational Responsiveness (inferred from trajectory adaptations). The framework dynamically weights these attributes according to the interaction phase to address behavioral uncertainty.

Experimental data — Case I: Aware/Responsive; Case II: Unaware/Unresponsive; Case III: Aware/Unresponsive; Case IV: Initially Unaware/Responsive.
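The phase-weighted fusion of the two attributes can be illustrated with a minimal sketch. This is not the published implementation; the function name, the log-odds fusion rule, and all numeric values are hypothetical, chosen only to show how evidence about awareness and responsiveness might be re-weighted as the interaction unfolds.

```python
import numpy as np

def update_cooperation_belief(prior, lr_aware, lr_responsive, phase_weight):
    """Hypothetical weighted log-odds fusion of two attribute likelihoods.

    prior         -- current belief P(cooperative), in (0, 1)
    lr_aware      -- likelihood ratio from awareness cues (head pose, gestures)
    lr_responsive -- likelihood ratio from trajectory-adaptation cues
    phase_weight  -- weight in [0, 1] on awareness vs. responsiveness,
                     varying with the interaction phase
    """
    # Combine evidence in log-odds space, weighted by the interaction phase.
    logit = np.log(prior / (1.0 - prior))
    logit += phase_weight * np.log(lr_aware)
    logit += (1.0 - phase_weight) * np.log(lr_responsive)
    return 1.0 / (1.0 + np.exp(-logit))

# Early in an encounter, awareness cues dominate (phase_weight near 1);
# once the robot starts maneuvering, responsiveness evidence takes over.
belief = update_cooperation_belief(
    prior=0.5, lr_aware=3.0, lr_responsive=0.8, phase_weight=0.9
)
```

Here strong awareness evidence (likelihood ratio 3.0) outweighs weakly negative responsiveness evidence, so the belief rises above the 0.5 prior.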

---

Robust and Efficient Phase Estimation in Legged Robots via Signal Imaging and Deep Neural Networks


K. Yazdipaz, N. Kohli, S.A. Golestaneh, M. Shahbazi
IEEE Access, 2025
[paper] | [video]

A robust and real-time method for phase estimation in legged robots is proposed using signal imaging and lightweight deep neural networks. Time-series data from IMUs and joint encoders are transformed into informative phase images through techniques such as stacked channel imaging and recurrence plots, enabling accurate detection of stance and flight phases without relying on fragile force sensors.
Robust Phase Estimation

Phase estimation results: transforming IMU and encoder signals into image representations for deep learning-based gait classification.
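A recurrence plot, one of the signal-imaging techniques named above, can be sketched in a few lines: a 1-D sensor signal becomes a 2-D binary image by thresholding pairwise distances between samples. This is an illustrative reconstruction of the general technique, not the paper's pipeline; the threshold `eps` and the synthetic signal are assumptions.

```python
import numpy as np

def recurrence_plot(signal, eps):
    """Binary recurrence image: 1 where two samples lie within eps."""
    x = np.asarray(signal, dtype=float)
    # Pairwise absolute differences via broadcasting -> (N, N) distance matrix.
    dist = np.abs(x[:, None] - x[None, :])
    return (dist <= eps).astype(np.uint8)

# Example: a short synthetic oscillation standing in for an IMU channel.
t = np.linspace(0.0, 2.0 * np.pi, 64)
img = recurrence_plot(np.sin(t), eps=0.1)
# img is a 64x64 binary image that could feed a lightweight CNN classifier.
```

Stacking such images from several IMU and encoder channels along the channel axis gives the kind of multi-channel "phase image" a compact network can classify into stance and flight.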