Layer-Condensed KV Cache for Efficient Inference of Large Language Models (arXiv preprint)