From 1ff4435d254a77a91393f7c40a6865bd5b50e6d7 Mon Sep 17 00:00:00 2001
From: "Gareth Paul Jones (GPJ)"
Date: Mon, 18 Mar 2024 09:36:24 -0700
Subject: [PATCH] Update README with Model Specifications (#27)

Added an overview of the model as discussed in response to #14.
Giving folks more info on the model specs before they proceed to
download the checkpoints should help them confirm they have the
necessary resources to utilize Grok-1 effectively.
---
 README.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/README.md b/README.md
index 12570d7..8b362bf 100644
--- a/README.md
+++ b/README.md
@@ -18,9 +18,26 @@ The script loads the checkpoint and samples from the model on a test input.
 
 Due to the large size of the model (314B parameters), a machine with enough GPU memory is required to test the model with the example code. The implementation of the MoE layer in this repository is not efficient. The implementation was chosen to avoid the need for custom kernels to validate the correctness of the model.
 
+# Model Specifications
+
+Grok-1 is currently designed with the following specifications:
+
+- **Parameters:** 314B
+- **Architecture:** Mixture of 8 Experts (MoE)
+- **Experts Utilization:** 2 experts used per token
+- **Layers:** 64
+- **Attention Heads:** 48 for queries, 8 for keys/values
+- **Embedding Size:** 6,144
+- **Tokenization:** SentencePiece tokenizer with 131,072 tokens
+- **Additional Features:**
+  - Rotary embeddings (RoPE)
+  - Supports activation sharding and 8-bit quantization
+- **Maximum Sequence Length (context):** 8,192 tokens
+
 # Downloading the weights
 
 You can download the weights using a torrent client and this magnet link:
+
 ```
 magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
 ```
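
Note for readers of the new spec list: "2 experts used per token" refers to top-2 routing over the 8 experts. The sketch below illustrates that routing pattern in JAX. It is only an illustration: the names (`route_top2`, `moe_forward`, `router_weights`, `expert_fns`) are hypothetical and do not correspond to the repository's actual MoE code, which the README describes as intentionally unoptimized.

```python
# Minimal sketch of top-2 routing over 8 experts, matching the spec list
# above. Illustrative names only; NOT the repository's MoE implementation.
import jax
import jax.numpy as jnp

NUM_EXPERTS = 8  # "Mixture of 8 Experts (MoE)"
TOP_K = 2        # "2 experts used per token"
EMBED = 6144     # "Embedding Size: 6,144"


def route_top2(x, router_weights):
    """x: [tokens, EMBED]; router_weights: [EMBED, NUM_EXPERTS]."""
    logits = x @ router_weights                 # [tokens, NUM_EXPERTS]
    gate_probs = jax.nn.softmax(logits, axis=-1)
    # Pick the two highest-probability experts for each token.
    top_vals, top_idx = jax.lax.top_k(gate_probs, TOP_K)
    # Renormalize so the two selected experts' gate weights sum to 1.
    top_vals = top_vals / jnp.sum(top_vals, axis=-1, keepdims=True)
    return top_idx, top_vals


def moe_forward(x, router_weights, expert_fns):
    """Combine the outputs of the two selected experts per token."""
    top_idx, top_vals = route_top2(x, router_weights)
    # Simple-but-wasteful evaluation: run every expert on every token,
    # then gather the two chosen outputs. This mirrors the README's note
    # that a correctness-oriented MoE layer need not be efficient.
    all_out = jnp.stack([f(x) for f in expert_fns], axis=1)  # [tokens, E, EMBED]
    picked = jnp.take_along_axis(all_out, top_idx[..., None], axis=1)  # [tokens, K, EMBED]
    return jnp.sum(picked * top_vals[..., None], axis=1)     # [tokens, EMBED]
```

An efficient implementation would dispatch each token only to its two selected experts (e.g. with custom kernels or scatter/gather over sharded experts) instead of evaluating all eight; the dense form above trades speed for easy verification.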