Qwen1.5 72B is the beta version of Qwen2, a transformer-based, decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:
- Significant performance improvement in human preference for chat models
- Multilingual support of both base and chat models
- Stable support of 32K context length for models of all sizes
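As a concrete illustration of how the chat models are prompted, the sketch below builds a prompt in the ChatML format that Qwen chat models use (`<|im_start|>` / `<|im_end|>` role markers). This is an assumption-laden, minimal sketch: in practice you would let the tokenizer's `apply_chat_template` method (in Hugging Face Transformers) produce this string for you, and the helper name `build_chatml_prompt` is hypothetical.

```python
def build_chatml_prompt(messages):
    """Format a list of {'role', 'content'} dicts into a ChatML prompt string.

    Sketch of the ChatML layout Qwen chat models expect; normally produced
    by tokenizer.apply_chat_template rather than by hand.
    """
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # End with an open assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
])
```

The resulting string is what gets tokenized and fed to the model; generation then continues from the open `assistant` turn.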
For more details, see the blog post and GitHub repo.
Usage of this model is subject to the Tongyi Qianwen LICENSE AGREEMENT.