arXiv preprint: Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning (AI Breakdown)