# Grok-1
```
 ██████╗ ██████╗  ██████╗ ██╗  ██╗
██╔════╝ ██╔══██╗██╔═══██╗██║ ██╔╝
██║  ███╗██████╔╝██║   ██║█████╔╝
██║   ██║██╔══██╗██║   ██║██╔═██╗
╚██████╔╝██║  ██║╚██████╔╝██║  ██╗
 ╚═════╝ ╚═╝  ╚═╝ ╚═════╝ ╚═╝  ╚═╝
```
This repository contains JAX example code for loading and running the Grok-1 open-weights model.
Make sure to download the checkpoint and place the `ckpt-0` directory in `checkpoints` - see [Downloading the weights](#downloading-the-weights)
Then, run
```shell
pip install -r requirements.txt
python run.py
```
to test the code.

On a fresh Debian or Ubuntu system, a complete setup from scratch looks roughly like this:

```shell
sudo apt update
sudo apt install python3 python3-pip git
git clone https://github.com/xai-org/grok-1.git
cd grok-1
pip3 install -r requirements.txt
python3 run.py
```
The script loads the checkpoint and samples from the model on a test input.
Due to the large size of the model (314B parameters), a machine with enough GPU memory is required to test the model with the example code.
The MoE layer in this repository is not implemented efficiently; this implementation was chosen to avoid the need for custom kernels while validating the correctness of the model.
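
To make the trade-off concrete, below is a minimal, illustrative sketch of such a "dense" MoE layer in JAX: every expert processes every token and the outputs are mixed by the router's top-k gate weights (k=2 per the specifications below). The function and parameter names are hypothetical and this is not the repository's actual implementation.

```python
import jax
import jax.numpy as jnp

def naive_moe(x, router_w, expert_w_in, expert_w_out, k=2):
    """Evaluate every expert on every token, then mix with top-k gate weights.

    x:            [tokens, d_model]
    router_w:     [d_model, n_experts]
    expert_w_in:  [n_experts, d_model, d_ff]
    expert_w_out: [n_experts, d_ff, d_model]
    """
    n_experts = router_w.shape[-1]

    # Router: softmax scores, then keep only the top-k experts per token.
    gates = jax.nn.softmax(x @ router_w, axis=-1)              # [tokens, n_experts]
    _, topk_idx = jax.lax.top_k(gates, k)                      # [tokens, k]
    mask = jax.nn.one_hot(topk_idx, n_experts).sum(axis=1)     # [tokens, n_experts]
    gates = gates * mask
    gates = gates / (gates.sum(axis=-1, keepdims=True) + 1e-9)

    # The inefficient part: every expert runs on every token, even though only
    # k outputs per token are actually used.
    def run_expert(w_in, w_out):
        return jax.nn.gelu(x @ w_in) @ w_out                   # [tokens, d_model]

    expert_out = jax.vmap(run_expert)(expert_w_in, expert_w_out)  # [n_experts, tokens, d_model]
    return jnp.einsum("te,etd->td", gates, expert_out)
```

An efficient implementation would instead dispatch each token only to its selected experts, which typically requires custom gather/scatter or grouped-matmul kernels.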
# Model Specifications
Grok-1 is currently designed with the following specifications:
- **Parameters:** 314B
- **Architecture:** Mixture of 8 Experts (MoE)
- **Experts Utilization:** 2 experts used per token
- **Layers:** 64
- **Attention Heads:** 48 for queries, 8 for keys/values
- **Embedding Size:** 6,144
- **Tokenization:** SentencePiece tokenizer with 131,072 tokens
- **Additional Features:**
- Rotary embeddings (RoPE)
- Supports activation sharding and 8-bit quantization
- **Maximum Sequence Length (context):** 8,192 tokens
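
For quick reference, the specifications above can be collected into a single config object; the field names below are purely illustrative and do not reflect the repository's actual configuration schema.

```python
from dataclasses import dataclass

@dataclass
class GrokSpec:
    # Illustrative names only; see the model code for the real configuration.
    n_params: str = "314B"
    n_experts: int = 8              # Mixture of 8 Experts
    experts_per_token: int = 2      # top-2 routing
    n_layers: int = 64
    n_query_heads: int = 48
    n_kv_heads: int = 8
    d_embed: int = 6144
    vocab_size: int = 131_072       # SentencePiece tokenizer
    max_seq_len: int = 8192         # maximum context length
    rotary_embeddings: bool = True  # RoPE
    int8_quantization: bool = True  # 8-bit quantization supported
```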
# Downloading the weights
You can download the weights using a torrent client and this magnet link:
```
magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
```
or directly using [HuggingFace 🤗 Hub](https://huggingface.co/xai-org/grok-1):
```
git clone https://github.com/xai-org/grok-1.git && cd grok-1
pip install huggingface_hub[hf_transfer]
huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt-0/* --local-dir checkpoints --local-dir-use-symlinks False
```
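
If you prefer to script the download instead of using the CLI, the same `ckpt-0` files can be fetched with the `huggingface_hub` Python API. This is a sketch, not a command taken from this repository; it assumes `huggingface_hub` is already installed as above.

```python
from huggingface_hub import snapshot_download

# Download only the ckpt-0 directory into ./checkpoints,
# matching the layout run.py expects.
snapshot_download(
    repo_id="xai-org/grok-1",
    repo_type="model",
    allow_patterns="ckpt-0/*",
    local_dir="checkpoints",
)
```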
# License
The code and associated Grok-1 weights in this release are licensed under the
Apache 2.0 license. The license only applies to the source files in this
repository and the model weights of Grok-1.