Mirror of https://github.com/xai-org/grok-1.git (synced 2024-12-25 19:19:53 +03:00)
Update README with Model Specifications (#27)
Added an overview of the model as discussed in response to #14. Adding more info on the model specs before folks proceed to download the checkpoints should help them ensure they have the necessary resources to effectively utilize Grok-1.
This commit is contained in:
parent b0e77734fe
commit 1ff4435d25
README.md: 17 lines changed
@@ -18,9 +18,26 @@ The script loads the checkpoint and samples from the model on a test input.
Due to the large size of the model (314B parameters), a machine with enough GPU memory is required to test the model with the example code.
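As a rough, back-of-envelope illustration of what "enough GPU memory" means for a model of this size (the bytes-per-parameter values are the standard ones for each precision; activations, KV caches, and framework overhead add more on top of the weights), consider:

```python
# Back-of-envelope weight-memory estimate for a 314B-parameter model.
# Actual requirements are higher once activations and runtime overhead are included.
NUM_PARAMS = 314e9

for precision, bytes_per_param in [("fp32", 4), ("bf16/fp16", 2), ("int8", 1)]:
    weight_gb = NUM_PARAMS * bytes_per_param / 1e9
    print(f"{precision:>9}: ~{weight_gb:,.0f} GB of weights")

# Even with 8-bit weights (~314 GB), the checkpoint exceeds any single GPU,
# so the weights have to be sharded across a multi-GPU machine.
```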
The implementation of the MoE layer in this repository is not efficient; it was chosen to avoid the need for custom kernels while still allowing the correctness of the model to be validated.
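To illustrate that trade-off, here is a minimal JAX sketch of a dense top-2 MoE layer: every expert processes every token and the router simply masks and re-weights the results, which avoids custom scatter/gather kernels at the cost of wasted compute. All function names, shapes, and the GELU activation below are illustrative assumptions, not code from this repository.

```python
import jax
import jax.numpy as jnp


def naive_moe(x, router_w, expert_w1, expert_w2, num_selected=2):
    """Dense top-k MoE: every expert runs on every token (simple but inefficient).

    x:         [tokens, emb]            input activations
    router_w:  [emb, num_experts]       router weights
    expert_w1: [num_experts, emb, ffn]  expert up-projections
    expert_w2: [num_experts, ffn, emb]  expert down-projections
    """
    # Router scores each expert and keeps the top-k per token.
    logits = x @ router_w                                     # [tokens, experts]
    top_vals, top_idx = jax.lax.top_k(logits, num_selected)   # [tokens, k]
    gate = jax.nn.softmax(top_vals, axis=-1)                  # [tokens, k]

    # Dense path: run *all* experts on *all* tokens, no gather/scatter kernels.
    hidden = jnp.einsum("te,hef->thf", x, expert_w1)
    expert_out = jnp.einsum("thf,hfe->the", jax.nn.gelu(hidden), expert_w2)

    # Zero out the experts that were not selected and mix the rest by gate weight.
    mask = jax.nn.one_hot(top_idx, logits.shape[-1])           # [tokens, k, experts]
    combine = jnp.einsum("tk,tkh->th", gate, mask)             # [tokens, experts]
    return jnp.einsum("th,the->te", combine, expert_out)       # [tokens, emb]


if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (4, 16))          # 4 tokens, toy embedding size 16
    rw = jax.random.normal(key, (16, 8))         # 8 experts
    w1 = jax.random.normal(key, (8, 16, 32))
    w2 = jax.random.normal(key, (8, 32, 16))
    print(naive_moe(x, rw, w1, w2).shape)        # (4, 16)
```

An efficient implementation would instead gather only the tokens routed to each expert, which is exactly the step that calls for custom kernels.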
# Model Specifications
Grok-1 is currently designed with the following specifications:
- **Parameters:** 314B
- **Architecture:** Mixture of 8 Experts (MoE)
- **Expert Utilization:** 2 experts used per token
- **Layers:** 64
- **Attention Heads:** 48 for queries, 8 for keys/values
- **Embedding Size:** 6,144
- **Tokenization:** SentencePiece tokenizer with 131,072 tokens
- **Additional Features:**
- Rotary embeddings (RoPE)
- Supports activation sharding and 8-bit quantization
- **Maximum Sequence Length (context):** 8,192 tokens
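For quick reference, the same numbers can be collected into a plain Python dictionary; the field names below are chosen for readability and do not correspond to the configuration classes used in this repository.

```python
# Illustrative summary of the specifications listed above (field names are ad hoc).
GROK_1_SPECS = {
    "num_params": 314_000_000_000,   # 314B total parameters
    "num_experts": 8,                # Mixture of 8 Experts (MoE)
    "experts_per_token": 2,          # top-2 routing
    "num_layers": 64,
    "num_q_heads": 48,
    "num_kv_heads": 8,               # grouped key/value heads
    "emb_size": 6144,
    "vocab_size": 131_072,           # SentencePiece tokenizer
    "max_seq_len": 8192,             # maximum context length in tokens
    "rotary_embeddings": True,       # RoPE
    "activation_sharding": True,
    "int8_quantization": True,
}
```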
# Downloading the weights
You can download the weights using a torrent client and this magnet link:
```
magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
```