fix conflicts in README.md

BIEMAX 2024-03-18 21:39:40 -03:00
parent adacffe69f
commit 7fead60864


@@ -2,16 +2,9 @@
This repository contains JAX example code for loading and running the Grok-1 open-weights model.
Make sure to download the checkpoint and place the `ckpt-0` directory in `checkpoints` before running the project.
Make sure to download the checkpoint and place the `ckpt-0` directory in `checkpoints` - see [Downloading the weights](#downloading-the-weights)
## 1. Downloading the weights
You can download the weights using a torrent client in the following magnet link:
```
magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
```
## 2. Installation
## 1. Installation
1. Install the project dependencies
@@ -27,16 +20,41 @@ python run.py
The script loads the checkpoint and samples from the model on a test input.
Due to the large size of the model (314B/Billion parameters), a machine with enough GPU memory is required to test the model with the example code.
Due to the large size of the model (314 Billion parameters), a machine with enough GPU memory is required to test the model with the example code.
The implementation of the MoE layer in this repository is not efficient; it was chosen to avoid the need for custom kernels while validating the correctness of the model.
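For intuition, here is a minimal, illustrative sketch of such a "dense" MoE layer in JAX. It is not the repository's code, and all names and shapes are made up: every expert is applied to every token and the outputs are mixed with the renormalised top-2 router weights, which needs no custom scatter/gather kernels but wastes compute.
```
import jax
import jax.numpy as jnp

def naive_moe(x, w_router, w_experts, k=2):
    # Dense (inefficient) MoE: run every expert on every token, then mix
    # the outputs with the renormalised top-k router weights.
    #   x:         [tokens, d_model]
    #   w_router:  [d_model, n_experts]
    #   w_experts: [n_experts, d_model, d_model]
    probs = jax.nn.softmax(x @ w_router, axis=-1)          # [tokens, n_experts]
    top_p, top_idx = jax.lax.top_k(probs, k)               # [tokens, k] each
    # Evaluate all experts for all tokens -- no scatter/gather kernels needed.
    expert_out = jnp.einsum("td,edh->teh", x, w_experts)   # [tokens, n_experts, d_model]
    # Scatter the renormalised top-k gate weights back to a dense matrix.
    gates = jnp.einsum("tk,tke->te",
                       top_p / top_p.sum(axis=-1, keepdims=True),
                       jax.nn.one_hot(top_idx, probs.shape[-1]))
    return jnp.einsum("te,ted->td", gates, expert_out)     # [tokens, d_model]

# Tiny smoke test with made-up sizes (4 tokens, d_model=16, 8 experts):
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (4, 16))
w_router = jax.random.normal(key, (16, 8)) * 0.02
w_experts = jax.random.normal(key, (8, 16, 16)) * 0.02
print(naive_moe(x, w_router, w_experts).shape)  # (4, 16)
```
A production MoE layer would instead route each token only to its selected experts; the dense version above trades that efficiency for simplicity.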
## 3. Requirements
## 2. Model Specifications
Make sure to meet the following requirements before running the project:
Grok-1 is currently designed with the following specifications:
- Needs either a TPU or GPU (NVIDIA/AMD supported only)
- There must be 8 devices in total (i.e., access to 8 individual TPU or GPU processing units)
- **Parameters:** 314B
- **Architecture:** Mixture of 8 Experts (MoE)
- **Experts Utilization:** 2 experts used per token
- **Layers:** 64
- **Attention Heads:** 48 for queries, 8 for keys/values
- **Embedding Size:** 6,144
- **Tokenization:** SentencePiece tokenizer with 131,072 tokens
- **Additional Features:**
- Rotary embeddings (RoPE)
- Supports activation sharding and 8-bit quantization
- **Maximum Sequence Length (context):** 8,192 tokens
- **TPU/GPU:** NVIDIA/AMD supported only
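Purely as an illustration, the list above can be collected into a small Python mapping; the key names below are invented for readability and are not the repository's actual configuration fields:
```
# Illustrative summary of the specifications listed above.
# Key names are hypothetical, not the repository's config keys.
GROK1_SPECS = {
    "parameters": 314_000_000_000,          # 314B
    "architecture": "Mixture of 8 Experts (MoE)",
    "experts_per_token": 2,
    "layers": 64,
    "query_heads": 48,
    "key_value_heads": 8,
    "embedding_size": 6_144,
    "tokenizer_vocab": 131_072,             # SentencePiece
    "max_sequence_length": 8_192,
    "rotary_embeddings": True,              # RoPE
    "activation_sharding": True,
    "int8_quantization": True,
    "accelerators": "TPU or GPU (NVIDIA/AMD supported only)",
}
```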
## 3. Downloading the weights
You can download the weights using a torrent client and this magnet link:
```
magnet:?xt=urn:btih:5f96d43576e3d386c9ba65b883210a393b68210e&tr=https%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
```
or directly using [HuggingFace 🤗 Hub](https://huggingface.co/xai-org/grok-1):
```
git clone https://github.com/xai-org/grok-1.git && cd grok-1
pip install huggingface_hub[hf_transfer]
huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt-0/* --local-dir checkpoints --local-dir-use-symlinks False
```
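If you would rather drive the download from Python than from the CLI, the `huggingface_hub` library exposes an equivalent function; the snippet below is a sketch of that alternative (it is not part of this README's instructions) and assumes `huggingface_hub` is already installed:
```
# Sketch: Python equivalent of the huggingface-cli command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="xai-org/grok-1",
    repo_type="model",
    allow_patterns="ckpt-0/*",      # fetch only the checkpoint directory
    local_dir="checkpoints",
    local_dir_use_symlinks=False,
)
```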
# License