diff --git a/README.md b/README.md
index 12570d7..8b362bf 100644
--- a/README.md
+++ b/README.md
@@ -18,9 +18,26 @@ The script loads the checkpoint and samples from the model on a test input.
 Due to the large size of the model (314B parameters), a machine with enough GPU memory is required to test the model with the example code.
 The implementation of the MoE layer in this repository is not efficient. The implementation was chosen to avoid the need for custom kernels to validate the correctness of the model.
 
+# Model Specifications
+
+Grok-1 is currently designed with the following specifications:
+
+- **Parameters:** 314B
+- **Architecture:** Mixture of Experts (MoE) with 8 experts
+- **Expert Utilization:** 2 experts used per token
+- **Layers:** 64
+- **Attention Heads:** 48 for queries, 8 for keys/values
+- **Embedding Size:** 6,144
+- **Tokenization:** SentencePiece tokenizer with a 131,072-token vocabulary
+- **Additional Features:**
+  - Rotary embeddings (RoPE)
+  - Supports activation sharding and 8-bit quantization
+- **Maximum Sequence Length (context):** 8,192 tokens
+
 # Downloading the weights
 
 You can download the weights using a torrent client and this magnet link:
+
 ```
 magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
 ```
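
For reference, the specification list added above maps onto a flat model configuration. A minimal sketch, assuming a plain dataclass; the class and field names are illustrative, not the repository's actual config API:

```python
from dataclasses import dataclass

@dataclass
class GrokConfig:
    """Illustrative config mirroring the published Grok-1 specifications."""
    num_layers: int = 64
    emb_size: int = 6_144
    num_q_heads: int = 48          # grouped-query attention:
    num_kv_heads: int = 8          # 48 / 8 = 6 query heads share each key/value head
    num_experts: int = 8           # MoE experts per layer
    num_selected_experts: int = 2  # experts routed per token
    vocab_size: int = 131_072      # SentencePiece vocabulary size
    max_seq_len: int = 8_192       # maximum context length
```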
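
One simple way to validate MoE correctness without custom kernels, as the note above describes, is dense routing: run every expert on every token and combine the outputs with a mask built from the router's top-2 picks, trading wasted compute for simplicity. A minimal JAX sketch of that idea, with illustrative names and shapes (not the repository's actual layer):

```python
import jax
import jax.numpy as jnp

def dense_moe(tokens, gate_w, expert_ws, k=2):
    """Naive top-k MoE: every expert processes every token, then mask.

    tokens:    [T, D]     token embeddings
    gate_w:    [D, E]     router weights
    expert_ws: [E, D, D]  one weight matrix per expert (illustrative)
    """
    top_vals, top_idx = jax.lax.top_k(tokens @ gate_w, k)   # [T, k] router picks
    weights = jax.nn.softmax(top_vals, axis=-1)             # normalize over the k picks
    all_out = jnp.einsum('td,edh->eth', tokens, expert_ws)  # [E, T, D] all experts, all tokens
    mask = jax.nn.one_hot(top_idx, expert_ws.shape[0])      # [T, k, E] selected experts
    combine = jnp.einsum('tke,tk->et', mask, weights)       # [E, T] per-expert token weights
    return jnp.einsum('et,eth->th', combine, all_out)       # [T, D] weighted sum of experts

# Tiny smoke test: T=16 tokens, D=64 dims, E=8 experts.
key = jax.random.PRNGKey(0)
out = dense_moe(jax.random.normal(key, (16, 64)),
                jax.random.normal(key, (64, 8)),
                jax.random.normal(key, (8, 64, 64)))
print(out.shape)  # (16, 64)
```

An efficient implementation would instead dispatch each token only to its selected experts, which is exactly where custom gather/scatter kernels become necessary.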
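
Any BitTorrent client can consume the magnet link above. One possible route, assuming the external `aria2c` CLI is installed (it is not part of this repository):

```python
import subprocess

MAGNET = (
    "magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e"
    "&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php"
    "&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969"
    "&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce"
)

# Download into the current directory and exit instead of seeding afterwards.
subprocess.run(["aria2c", "--seed-time=0", MAGNET], check=True)
```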