arXiv preprint: Evaluating Human Alignment and Model Faithfulness of LLM Rationale