Want to deploy large models locally to save money and protect data privacy? That's a great idea!
But once you start browsing models, the range of parameter sizes can be overwhelming: 7B, 14B, 32B, 70B... The same model often ships in several sizes, so which one should you choose?
And which ones can your computer actually handle?
Don't panic! This article will help you sort out your thinking and explain, in the simplest possible terms, how to choose hardware for deploying large models locally. I guarantee you won't be confused after reading it!
There is a Hardware Configuration and Model Size Reference Table at the bottom of this article.
Understanding Large Model Parameters: What Do 7B, 14B, 32B Represent?
- The Meaning of Parameters: The numbers 7B, 14B, 32B represent the number of parameters in a large language model (LLM), where "B" is an abbreviation for Billion. Parameters can be thought of as the "weights" learned by the model during training, which store the model's understanding of language, knowledge, and patterns.
- Number of Parameters and Model Capabilities: Generally speaking, the more parameters a model has, the more complex the model is. In theory, it can learn and store richer information, thereby capturing more complex language patterns and performing more powerfully in understanding and generating text.
- Resource Consumption and Model Size: Models with more parameters also require more computing resources (GPU compute), more memory (VRAM and system RAM), and more data to train and run. A rough sizing sketch follows this list.
- Small Models vs. Large Models:
- Large Models (such as 32B, 65B, or even larger): Can handle more complex tasks, generate more coherent and nuanced text, and may perform better in knowledge Q&A, creative writing, etc. However, they have high hardware requirements and run relatively slowly.
- Small Models (such as 7B, 13B): Consume fewer resources and run faster, making them more suitable for running on devices with limited resources or in application scenarios that are sensitive to latency. Small models can also perform well on some simple tasks.
- The Trade-Off of Choice: When choosing a model size, you need to weigh the model's capabilities against the hardware resources. More parameters are not necessarily "better". You need to choose the most suitable model based on the actual application scenario and hardware conditions.
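To make the parameter-count-to-memory relationship concrete, here is a minimal sketch that estimates how much memory a model's weights occupy at different precisions. The function name and the size list are illustrative, not taken from any particular tool, and the estimate ignores activations, KV cache, and framework overhead, so treat it as a lower bound.

```python
# Rough estimate of how much memory a model's weights occupy at a given precision.
# Ignores activations, KV cache, and framework overhead, so real usage is higher.

BYTES_PER_PARAM = {
    "fp32": 4.0,   # 32-bit float
    "fp16": 2.0,   # 16-bit float (a common "native" precision)
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization
}

def weight_footprint_gb(num_params_billions: float, precision: str) -> float:
    """Approximate weight size in GB for a model with the given parameter count."""
    total_bytes = num_params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return total_bytes / (1024 ** 3)

if __name__ == "__main__":
    for size in (7, 13, 32, 70):
        for prec in ("fp16", "int8", "int4"):
            print(f"{size}B @ {prec}: ~{weight_footprint_gb(size, prec):.1f} GB")
```

For example, a 7B model's weights alone are roughly 13 GB at fp16 but only about 3.3 GB at 4-bit, which is why quantization matters so much on consumer hardware.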
What Kind of Hardware Do I Need to Run a Local Model?
Core Requirement: Video RAM (VRAM)
- The Importance of VRAM: When running large models, the model's parameters and intermediate results must be loaded into video memory, which makes VRAM capacity the single most important hardware spec for running local large models. With too little VRAM, the model may fail to load, you may be limited to very small models, or running speed may drop sharply.
- The Bigger, the Better: Ideally, it is best to have a GPU with as much video memory as possible, so that you can run models with larger parameters and get better performance.
Second Most Important: System Memory (RAM)
- The Role of RAM: System memory RAM is used to load the operating system, run programs, and as a supplement to video memory. When video memory is insufficient, system RAM can be used as an "overflow" space, but the speed will be much slower (because RAM is much slower than VRAM), and the model running efficiency will be significantly reduced.
- Enough RAM is also Important: It is recommended to have at least 16GB or even 32GB or more of system RAM, especially when your GPU video memory is limited. Larger RAM can help relieve video memory pressure.
Processor (CPU)
- The Role of the CPU: The CPU is mainly responsible for data preprocessing, model loading, and some model computing tasks (especially in the case of CPU offloading). A CPU with good performance can improve the model loading speed and assist the GPU in calculations to a certain extent.
- NPU (Neural Processing Unit): Some laptops include an NPU, a piece of hardware dedicated to accelerating AI computation. An NPU can speed up certain AI operations, including inference for some large models, improving efficiency and reducing power consumption. If your laptop has one, it is a bonus, but the GPU remains the core of running local large models; NPU support and benefit depend on the specific model and software.
Storage (Hard Disk/SSD)
- The Role of Storage: You need enough hard disk space to store model files. The files of large models are usually very large. For example, a quantized 7B model may require 4-5GB of space, and larger models require tens or even hundreds of GB of space.
- SSD is Better than HDD: Using a solid-state drive (SSD) instead of a mechanical hard drive (HDD) can significantly speed up model loading.
Hardware Priority
- Video RAM (VRAM) (Most Important)
- System Memory (RAM) (Important)
- GPU Performance (Computing Power) (Important)
- CPU Performance (Auxiliary Role)
- Storage Speed (SSD is Better than HDD)
What if I Don't Have a Dedicated GPU?
- Run with Integrated Graphics and CPU: If you don't have a dedicated GPU, you can still run models on integrated graphics (such as Intel Iris Xe) or entirely on the CPU, but performance will be severely limited. Focus on 7B or smaller, heavily optimized models and use techniques such as quantization to reduce resource requirements (the short check script after this list shows what your machine has to work with).
- Cloud Services: If you need to run large models but your local hardware is insufficient, you can consider using cloud GPU services, such as Google Colab, AWS SageMaker, RunPod, etc.
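Before picking a model, it helps to confirm what your machine actually has. Here is a minimal sketch, assuming the optional packages psutil and PyTorch are installed; any system tool that reports RAM and GPU memory works just as well.

```python
# Quick check of the hardware available for local inference.
# Assumes the optional packages psutil and torch are installed.
import psutil

ram_gb = psutil.virtual_memory().total / (1024 ** 3)
print(f"System RAM: {ram_gb:.1f} GB")

try:
    import torch
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}, VRAM: {props.total_memory / (1024 ** 3):.1f} GB")
    else:
        print("No CUDA GPU detected: expect CPU-only (or integrated-graphics) speeds.")
except ImportError:
    print("PyTorch not installed; skipping GPU check.")
```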
How to Run Local Models?
For beginners, it is recommended to use some user-friendly tools that simplify the process of running local models:
- Ollama: Operated through the command line, but very simple to install and use, focused on getting models running quickly (a minimal example follows this list).
- LM Studio: The interface is simple and intuitive, supporting model download, model management, and one-click running.
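As a concrete example, here is a minimal sketch of calling a locally running Ollama instance from Python through its default REST endpoint on port 11434. The model name "llama3" is only an illustration; substitute whatever model you have already pulled with `ollama pull`.

```python
# Minimal example: send a prompt to a local Ollama server and print the reply.
# Assumes Ollama is installed and running, and a model (here "llama3") has been
# pulled beforehand with `ollama pull llama3`. The model name is illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local API endpoint
    json={
        "model": "llama3",
        "prompt": "Explain what 4-bit quantization does in one sentence.",
        "stream": False,                      # return the full response at once
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```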
Hardware Configuration and Model Size Reference Table
X86 Laptops

| Hardware Configuration | VRAM / Memory | Recommended Quantization | Recommended LLM Parameter Range (after Quantization) | Notes |
| --- | --- | --- | --- | --- |
| Integrated Graphics Laptop (e.g., Intel Iris Xe) | Shared system memory (8GB+ RAM) | 8-bit, or even 4-bit | ≤ 7B (extreme quantization) | Very basic local experience, suitable for learning and light use. Limited performance and slow inference. Use 4-bit or lower-precision quantized models to minimize memory usage. Best suited to small models such as TinyLlama. |
| Entry-Level Gaming Laptop / Thin-and-Light with Dedicated Graphics (e.g., RTX 3050/4050) | 4-8 GB VRAM + 16GB+ RAM | 4-bit - 8-bit | 7B - 13B (quantized) | 7B models run fairly smoothly, and some 13B models are usable with quantization and optimization. Good for trying mainstream small and medium models. VRAM is still limited, so larger models will struggle. |
| Mid-to-High-End Gaming Laptop / Mobile Workstation (e.g., RTX 3060/3070/4060) | 8-16 GB VRAM + 16GB+ RAM | 4-bit - 16-bit (flexible) | 7B - 30B (quantized) | 7B and 13B models run comfortably, with room to try models around 30B (requires good quantization and optimization). Pick the quantization precision that balances performance and model quality for your needs. Suitable for exploring a wider range of medium and large models. |
ARM (Raspberry Pi, Apple M Series)

| Hardware Configuration | VRAM / Memory | Recommended Quantization | Recommended LLM Parameter Range (after Quantization) | Notes |
| --- | --- | --- | --- | --- |
| Raspberry Pi 4/5 | 4-8 GB RAM | 4-bit (or lower) | ≤ 7B (extreme quantization) | Limited by memory and compute; mainly useful for running very small models or as an experimental platform. Good for studying model quantization and optimization techniques. |
| Apple M1/M2/M3 (Unified Memory) | 8GB - 64GB unified memory | 4-bit - 16-bit (flexible) | 7B - 30B+ (quantized) | The unified memory architecture uses memory very efficiently; even an 8GB M-series Mac can run moderately sized models. Higher-memory configurations (16GB+) can run larger models, and can even attempt models above 30B. Apple silicon also has an advantage in energy efficiency. |
Nvidia GPU Desktops and Servers

| Hardware Configuration | VRAM / Memory | Recommended Quantization | Recommended LLM Parameter Range (after Quantization) | Notes |
| --- | --- | --- | --- | --- |
| Entry-Level Dedicated Graphics (e.g., RTX 4060/4060 Ti) | 8-16 GB VRAM | 4-bit - 16-bit (flexible) | 7B - 30B (quantized) | Performance close to a mid-to-high-end gaming laptop, but desktops dissipate heat better and can run stably for long periods. Cost-effective entry point for local LLMs. |
| Mid-Range Dedicated Graphics (e.g., RTX 4070/4070 Ti/4080) | 12-16 GB VRAM | 4-bit - 16-bit (flexible) | 7B - 30B+ (quantized) | Runs medium and large models smoothly, with headroom to try larger parameter counts. Suitable for users with higher expectations of the local LLM experience. |
| High-End Dedicated Graphics (e.g., RTX 3090/4090, RTX 6000 Ada) | 24-48 GB VRAM | 8-bit - 32-bit | 7B - 70B+ (quantized/native) | Runs most open-source LLMs, including large models such as 65B and 70B. Can use higher precision (16-bit, 32-bit) for the best model quality, or quantization to fit even larger models. Suitable for professional developers, researchers, and heavy LLM users. |
| Server-Level GPU (e.g., A100, H100, A800, H800) | 40GB - 80GB+ VRAM | 16-bit - 32-bit (native precision) | 30B - 175B+ (native/quantized) | Purpose-built for AI computing, with very large VRAM and extremely high compute. Can run ultra-large models and even perform training and fine-tuning. Suited to enterprise applications, large-scale model deployment, and research institutions. |
Additional Table Notes
- Quantization: Reducing the numerical precision of model parameters, for example from 16-bit floating point (float16) to 8-bit integer (int8) or 4-bit integer (int4). Quantization significantly shrinks model size and VRAM usage and speeds up inference, but it may slightly reduce model accuracy (a small numerical example follows these notes).
- Extreme Quantization: Using very low bit-width quantization, such as 3-bit or 2-bit. It further reduces resource requirements, but the drop in model quality can be more noticeable.
- Native: Refers to the model running at its original precision, such as float16 or bfloat16. The best model quality can be obtained, but the resource requirements are the highest.
- Quantized Parameter Range: The "Recommended LLM Parameter Range (after Quantization)" in the table refers to the model parameter range that the hardware can run smoothly under the premise of reasonable quantization. The actual model size and performance that can be run also depend on the specific model architecture, degree of quantization, software optimization, and other factors. The parameter range given here is for reference only.
- Unified Memory: A feature of Apple Silicon chips, where the CPU and GPU share the same physical memory, resulting in higher data exchange efficiency.
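To make the quantization idea concrete, here is a toy numpy sketch of symmetric 4-bit quantization of a small weight tensor. It only illustrates the principle; real runtimes use more sophisticated schemes (per-group scales, mixed precision, and so on).

```python
# Toy illustration of symmetric 4-bit quantization: store weights as small integers
# plus one scale factor, then reconstruct approximate values when needed.
import numpy as np

weights = np.random.randn(8).astype(np.float32)   # pretend these are model weights

scale = np.abs(weights).max() / 7                  # signed 4-bit range is roughly [-8, 7]
quantized = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print("original   :", np.round(weights, 3))
print("4-bit ints :", quantized)
print("recovered  :", np.round(dequantized, 3))
print("max error  :", np.abs(weights - dequantized).max())
```

Each weight now carries only 4 bits of information instead of 16 or 32, which is where the large memory savings in the table come from; the small reconstruction error is the accuracy cost mentioned above.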