This lecture on cross-modal learning opens with course logistics: assignments, deadlines, and requirements for the midterm and reading assignments. It then explores transferring information between modalities, focusing on scenarios where one modality has limited data, and introduces three paradigms: transfer learning via pre-trained models, co-learning (introducing extra modalities during training), and model induction (inducing behavior across models with only black-box access). The lecture further explains multitask and transfer learning across different modalities; co-learning through fusion, alignment, and translation; and model induction with self-training and co-training algorithms, including their applications and limitations.
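The self-training idea mentioned above can be sketched in a few lines: fit a model on the small labeled set, pseudo-label the unlabeled pool, keep only confident predictions, and refit. The snippet below is a minimal illustrative sketch, not the lecture's exact algorithm; the nearest-centroid classifier and the margin-based confidence rule are assumptions chosen to keep the example self-contained.

```python
# Minimal self-training sketch (illustrative; the nearest-centroid model and
# margin-based confidence threshold are assumptions, not from the lecture).
import numpy as np

def fit_centroids(X, y):
    # One mean vector (centroid) per class label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    # Assign each point to its nearest class centroid.
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)], d

def self_train(X_lab, y_lab, X_unlab, margin=1.0, rounds=3):
    # Iteratively move confident pseudo-labeled points into the labeled set.
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        cents = fit_centroids(X, y)
        preds, d = predict(cents, pool)
        d_sorted = np.sort(d, axis=1)
        # Confident = gap between best and second-best class exceeds the margin.
        confident = (d_sorted[:, 1] - d_sorted[:, 0]) > margin
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, preds[confident]])
        pool = pool[~confident]
    return fit_centroids(X, y)
```

Co-training follows the same loop but maintains two models over two feature views, each supplying pseudo-labels for the other.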