Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free