Compare commits

2 Commits

| SHA1 | Message | Date |
| --- | --- | --- |
| e5ea38bde9 | Merge 4e2e30bd6f into d6d9447e2d | 2024-03-18 19:50:12 +01:00 |
| 4e2e30bd6f | Update README.md to have context length higher up | 2024-03-18 10:43:55 -07:00 |

Commit message for 4e2e30bd6f:

In my original summary of the model specifications, I had put the context length near the bottom, but on reflection it is probably one of the more relevant details for end-users, so it should be higher.

Also, "Additional Features" should be the final bullet point for editorial reasons.

2 changed files with 2 additions and 2 deletions

README.md

@@ -25,6 +25,7 @@ Grok-1 is currently designed with the following specifications:
 - **Parameters:** 314B
 - **Architecture:** Mixture of 8 Experts (MoE)
 - **Experts Utilization:** 2 experts used per token
+- **Maximum Sequence Length (context):** 8,192 tokens
 - **Layers:** 64
 - **Attention Heads:** 48 for queries, 8 for keys/values
 - **Embedding Size:** 6,144
@@ -32,7 +33,6 @@ Grok-1 is currently designed with the following specifications:
 - **Additional Features:**
   - Rotary embeddings (RoPE)
   - Supports activation sharding and 8-bit quantization
-- **Maximum Sequence Length (context):** 8,192 tokens
 # Downloading the weights
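
Illustrative sketch, not the repository's code: the spec list in the README diff above describes a mixture of 8 experts with 2 experts used per token, so the routing step amounts to a top-2 selection over 8 router scores. The sketch below shows that selection in JAX; every name, shape, and the random toy weights are assumptions made for the example.

```python
# Toy top-2 expert routing, matching the "8 experts" / "2 experts per token"
# figures from the README diff above. Not the actual Grok-1 implementation.
import jax

NUM_EXPERTS = 8   # "Mixture of 8 Experts (MoE)"
TOP_K = 2         # "Experts Utilization: 2 experts used per token"
EMBED = 6144      # "Embedding Size: 6,144"

def top2_route(hidden, router_weights):
    """Return the indices of the 2 highest-scoring experts per token and their gate weights."""
    logits = hidden @ router_weights                    # [tokens, NUM_EXPERTS]
    top_logits, top_idx = jax.lax.top_k(logits, TOP_K)  # [tokens, TOP_K] each
    gates = jax.nn.softmax(top_logits, axis=-1)         # normalize over the 2 selected experts
    return top_idx, gates

# Four example token embeddings and a random router, for demonstration only.
hidden = jax.random.normal(jax.random.PRNGKey(0), (4, EMBED))
router = 0.02 * jax.random.normal(jax.random.PRNGKey(1), (EMBED, NUM_EXPERTS))
idx, gates = top2_route(hidden, router)
print(idx.shape, gates.shape)  # (4, 2) (4, 2)
```

In a full MoE layer, each token's output would be the gate-weighted sum of the outputs of its two selected expert feed-forward networks; only the routing step is sketched here.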

requirements.txt

@@ -1,4 +1,4 @@
 dm_haiku==0.0.12
-jax[cuda12-pip]==0.4.25 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
+jax[cuda12_pip]==0.4.25 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
 numpy==1.26.4
 sentencepiece==0.2.0
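
A small, hypothetical check (not part of the repository) that an environment matches the pins above. It queries installed versions via importlib.metadata; the PyPI distribution names are assumptions based on the requirements (e.g. dm_haiku is published as dm-haiku), and the CUDA wheels pulled in through the jax[cuda12_pip] extra and the -f find-links URL are not covered.

```python
# Compare installed package versions against the pins in requirements.txt above.
from importlib.metadata import PackageNotFoundError, version

PINNED = {
    "dm-haiku": "0.0.12",   # requirements.txt: dm_haiku==0.0.12
    "jax": "0.4.25",        # requirements.txt: jax[cuda12_pip]==0.4.25
    "numpy": "1.26.4",
    "sentencepiece": "0.2.0",
}

for package, expected in PINNED.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        print(f"{package}: not installed (expected {expected})")
        continue
    status = "ok" if installed == expected else f"MISMATCH (expected {expected})"
    print(f"{package}: {installed} {status}")
```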