This tutorial is held at the 25th ACM International Conference on Multimodal Interaction (ICMI) at Sorbonne University, Campus Pierre & Marie Curie, on 13 Oct. 2023. It is designed to introduce the foundational computing methods for analysing close-up infrared eye images and speech/audio signals captured by body-worn sensors, the theoretical (e.g. psychophysiological) basis for the relationship between these sensing modalities and affect, and the latest developments in models for affect analysis. It will cover eye and speech/audio behaviour analysis, statistical modelling and machine learning pipelines, and multimodal systems centred on eye and speech/audio behaviour for affect. Various application areas will be discussed, along with examples that illustrate the potential, challenges, and pitfalls of these methodologies in wearable contexts.