Meta details Llama 3: 8B and 70B parameter models and a focus on reducing false refusals; an upcoming larger model, trained on 15T tokens, has 400B parameters (Alex Heath/The Verge) | Techmeme