
About Apertus
EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS) have released Apertus, Switzerland’s first large-scale open, multilingual language model and a milestone in transparency and diversity for generative AI. Trained on 15 trillion tokens across more than 1,000 languages, with 40% of the data being non-English, Apertus includes many languages that have so far been underrepresented in LLMs, such as Swiss German and Romansh. Apertus serves as a building block for developers and organizations creating future applications such as chatbots, translation systems, or educational tools.
The model is named Apertus – Latin for “open” – highlighting its distinctive feature: the entire development process, including its architecture, model weights, and training data and recipes, is openly accessible and fully documented.
How to access Apertus
AI researchers, professionals, and experienced enthusiasts can either access the model through the strategic partner Swisscom or download it from Hugging Face – a platform for AI models and applications – and deploy it for their own projects.
Apertus is freely available in two sizes, with 8 billion and 70 billion parameters; the smaller model is more appropriate for individual use. Both models are released under a permissive open-source license, allowing use in education and research as well as broad societal and commercial applications.
While setting up Apertus is straightforward for professionals and proficient users, practical use requires additional components such as servers, cloud infrastructure, or specific user interfaces.

- The models are available for download here: Apertus LLM Collection
- The model is also accessible via PublicAI: https://publicai.co/
Deployment of the models is supported via the latest versions of transformers, vLLM, or SGLang.
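As an illustration, the following is a minimal sketch of local inference with the transformers library. The model ID used here is an assumption for illustration; the exact repository names should be taken from the Apertus LLM Collection on Hugging Face.

```python
# Minimal sketch of local inference with Hugging Face transformers.
# The model ID below is an assumption for illustration; check the
# Apertus LLM Collection on Hugging Face for the exact repository names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct-2509"  # assumed ID; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load weights in the checkpoint's native precision
    device_map="auto",    # place weights on available GPU(s) or CPU
)

# Format a chat-style prompt using the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "What languages does Apertus support?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For server-style deployment, vLLM and SGLang can serve the same checkpoints behind an OpenAI-compatible API (for example, vllm serve followed by the model ID).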
Transparency and Compliance
Apertus is designed with transparency at its core, ensuring full reproducibility of the training process. Alongside the models, the research team has published a range of resources: comprehensive documentation and source code for the training process and the datasets used, as well as model weights including intermediate checkpoints, all released under a permissive open-source license that also allows commercial use. The terms and conditions are available via Hugging Face.
Apertus was developed with due consideration for Swiss data protection law, Swiss copyright law, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on publicly available data. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data and other undesired content before training begins.
Apertus and the Swiss AI Initiative
Apertus was developed as part of the Swiss AI Initiative, led by EPFL and ETH Zurich. It is the result of a collaborative effort bringing together researchers, engineers, and students from across Switzerland, alongside the engineers and infrastructure of the Swiss National Supercomputing Centre (CSCS). This collective expertise, spanning multiple institutions and disciplines, has made the development of Apertus possible.