My deployment setup has gradually converged on vLLM. lmdeploy does not support embedding models, and sglang's feature set largely mirrors (and borrows from) vLLM's, so I decided to go all-in on vLLM: building a single image is enough to cover every deployment scenario.
 
The vllm serve command in detail

1. Service configuration

  • --host: hostname the server binds to.
  • --port: port the server binds to; defaults to 8000.
  • --uvicorn-log-level: log level for Uvicorn; one of debug, info, warning, error, critical, trace; defaults to info.
  • --allow-credentials: whether cross-origin requests may carry credentials (such as cookies); defaults to False.
  • --allowed-origins: allowed CORS origins; defaults to ['*'].
  • --allowed-methods: allowed HTTP methods; defaults to ['*'].
  • --allowed-headers: allowed HTTP headers; defaults to ['*'].
  • --api-key: if provided, the server requires this API key in the request header.
  • --ssl-keyfile: path to the SSL private key file.
  • --ssl-certfile: path to the SSL certificate file.
  • --ssl-ca-certs: path to the CA certificates file.
  • --ssl-cert-reqs: client certificate verification level (CERT_NONE, CERT_OPTIONAL, or CERT_REQUIRED); defaults to 0.
  • --root-path: FastAPI root path prefix (for deployments behind a path-based routing proxy).
  • --middleware: additional ASGI middleware to apply; can be passed multiple times; each value should be an import path.
  • --disable-fastapi-docs: disable FastAPI's auto-generated OpenAPI schema and documentation UI.
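
For a concrete picture of the service-level flags, a minimal sketch (the model name and API key below are placeholders, not values from this post):

```bash
# Bind address, port, and a required API key; log level left at the default.
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --host 0.0.0.0 \
  --port 8000 \
  --api-key sk-local-test \
  --uvicorn-log-level info

# The OpenAI-compatible endpoints then require the key:
curl http://127.0.0.1:8000/v1/models -H "Authorization: Bearer sk-local-test"
```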

2. Model loading and initialization

  • --model: HuggingFace model name or path to load; defaults to facebook/opt-125m.
  • --task: task the model is used for; one of auto, generate, embedding, embed, classify, score, reward, transcription; defaults to auto.
  • --tokenizer: HuggingFace tokenizer name or path; defaults to the model name or path when unset.
  • --skip-tokenizer-init: skip tokenizer initialization; defaults to False.
  • --revision: model revision (a Git commit hash, branch name, or tag).
  • --code-revision: revision of the model code.
  • --tokenizer-revision: revision of the tokenizer.
  • --tokenizer-mode: tokenizer mode; one of auto, slow, mistral, custom; defaults to auto.
  • --trust-remote-code: trust and load remote code (e.g. custom model code); defaults to False.
  • --allowed-local-media-path: allow API requests to read local images or videos from the server's file system (enable only in trusted environments).
  • --download-dir: cache directory for model downloads.
  • --load-format: format of the model weights to load; one of auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer; defaults to auto.
  • --config-format: format of the model config; one of auto, hf, mistral; defaults to auto.
  • --dtype: data type for model weights and activations; one of auto, half, float16, bfloat16, float, float32; defaults to auto.
  • --kv-cache-dtype: data type of the KV cache; one of auto, fp8, fp8_e5m2, fp8_e4m3; defaults to auto.
  • --max-model-len: maximum context length of the model; derived from the model config when unset.
  • --model-impl: model implementation to use; one of auto, vllm, transformers; defaults to auto.
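
A hedged sketch of the loading flags, assuming a local checkpoint (the path is a placeholder):

```bash
# Load a local checkpoint with an explicit dtype and context length,
# and keep downloads in a dedicated cache directory.
vllm serve /models/Qwen2.5-7B-Instruct \
  --dtype bfloat16 \
  --max-model-len 8192 \
  --trust-remote-code \
  --download-dir /data/hf-cache
```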

3. Inference and generation

  • --guided-decoding-backend: engine used for guided (constrained) decoding; one of outlines, lm-format-enforcer, xgrammar; defaults to xgrammar.
  • --logits-processor-pattern: regex pattern of allowed logits processor qualified names.
  • --max-seq-len-to-capture: maximum sequence length covered by CUDA graphs.
  • --quantization: quantization method; choices include aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, and None.
  • --rope-scaling: RoPE scaling configuration (JSON).
  • --rope-theta: RoPE base frequency (theta).
  • --hf-overrides: override specific parameters of the HuggingFace config (JSON).
  • --enforce-eager: always run the model in PyTorch eager mode (disables CUDA graphs); defaults to False.
  • --disable-custom-all-reduce: disable the custom all-reduce kernel; defaults to False.
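
The OpenAI-compatible server also accepts guided-decoding fields such as guided_json directly in the request body; a sketch, with a placeholder model name:

```bash
# Constrain the completion to a JSON schema via the guided_json extra parameter.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-7B-Instruct",
    "messages": [{"role": "user", "content": "Return a user named Alice, age 30, as JSON."}],
    "guided_json": {"type": "object", "properties": {"name": {"type": "string"}, "age": {"type": "integer"}}}
  }'
```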

4. Distributed execution and parallelism

  • --distributed-executor-backend: distributed execution backend; one of ray, mp, uni, external_launcher. If unset, vLLM picks mp when the workers fit on a single host and ray otherwise.
  • --pipeline-parallel-size: pipeline parallel degree; defaults to 1.
  • --tensor-parallel-size: tensor parallel degree; defaults to 1.
  • --max-parallel-loading-workers: maximum number of parallel loading workers.
  • --ray-workers-use-nsight: profile Ray workers with Nsight; defaults to False.
  • --block-size: token block size; one of 8, 16, 32, 64, 128; defaults to 16.
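
For example, a single-node tensor-parallel launch might look like this (the model name is a placeholder):

```bash
# Shard the model across 4 GPUs with tensor parallelism; pipeline parallelism stays at 1.
vllm serve Qwen/Qwen2.5-32B-Instruct \
  --tensor-parallel-size 4 \
  --pipeline-parallel-size 1
```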

5. Resource management

  • --swap-space: CPU swap space per GPU in GiB; defaults to 4.
  • --cpu-offload-gb: CPU offload space per GPU in GiB; defaults to 0.
  • --gpu-memory-utilization: fraction of GPU memory to use per GPU (0-1); defaults to 0.9.
  • --num-gpu-blocks-override: override the number of GPU blocks (used for testing preemption).
  • --max-num-batched-tokens: maximum number of batched tokens per iteration.
  • --max-num-seqs: maximum number of sequences per iteration.
  • --max-logprobs: maximum number of logprobs to return.
  • --max-model-len: model context length; derived from the model config when unset.
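
A sketch combining the memory-related knobs (placeholder path; the right values depend on your hardware):

```bash
# Cap GPU memory usage, limit concurrent sequences, and give each GPU more CPU swap space.
vllm serve /models/Qwen2.5-7B-Instruct \
  --gpu-memory-utilization 0.85 \
  --max-num-seqs 128 \
  --max-model-len 16384 \
  --swap-space 8
```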

6. LoRA and prompt adapters

  • --lora-modules: LoRA module configuration (name=path or JSON).
  • --prompt-adapters: prompt adapter configuration (name=path).
  • --chat-template: file path to the chat template, or the template itself in single-line form.
  • --chat-template-content-format: format used to render message content within the chat template; one of auto, string, openai; defaults to auto.
  • --response-role: role name returned in responses; defaults to assistant.
  • --enable-lora: enable LoRA support; defaults to False.
  • --max-loras: maximum number of LoRAs in a single batch; defaults to 1.
  • --max-lora-rank: maximum LoRA rank; defaults to 16.
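
A sketch of serving one LoRA adapter alongside the base model (names and paths are placeholders); the adapter then appears under its own model name in the API:

```bash
# Attach a LoRA adapter at startup.
vllm serve /models/Qwen2.5-7B-Instruct \
  --enable-lora \
  --max-lora-rank 32 \
  --lora-modules my-adapter=/models/loras/my-adapter

# Requests can now target the adapter by name.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-adapter", "messages": [{"role": "user", "content": "hi"}]}'
```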

7. Scheduling and streaming output

  • --num-scheduler-steps: maximum number of forward steps per scheduler call; defaults to 1.
  • --multi-step-stream-outputs: stream outputs during multi-step scheduling; defaults to True.
  • --scheduler-delay-factor: apply a delay (this factor multiplied by the previous prompt's latency) before scheduling the next prompt; defaults to 0.0.
  • --enable-chunked-prefill: enable chunked prefill (chunking based on max_num_batched_tokens); defaults to False.
  • --scheduling-policy: scheduling policy; fcfs (first come, first served) or priority; defaults to fcfs.
  • --scheduler-cls: scheduler class to use; defaults to vllm.core.scheduler.Scheduler.
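
For instance, chunked prefill with an explicit per-iteration token budget (placeholder path):

```bash
# Long prompts are prefilled in chunks so decode steps of other requests are not starved.
vllm serve /models/Qwen2.5-7B-Instruct \
  --enable-chunked-prefill \
  --max-num-batched-tokens 8192 \
  --scheduling-policy fcfs
```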

8. Speculative decoding

  • --speculative-model: name of the draft model used for speculative decoding.
  • --speculative-model-quantization: quantization method for the draft model; same choices as --quantization.
  • --num-speculative-tokens: number of speculative tokens to sample from the draft model.
  • --speculative-disable-mqa-scorer: disable the MQA scorer during speculative decoding; defaults to False.
  • --speculative-draft-tensor-parallel-size: tensor parallel degree for the draft model.
  • --speculative-max-model-len: maximum sequence length supported by the draft model.
  • --speculative-disable-by-batch-size: disable speculative decoding when the number of queued requests exceeds this value.
  • --ngram-prompt-lookup-max: maximum window size for n-gram prompt lookup in speculative decoding.
  • --ngram-prompt-lookup-min: minimum window size for n-gram prompt lookup in speculative decoding.
  • --spec-decoding-acceptance-method: token acceptance method for speculative decoding; rejection_sampler or typical_acceptance_sampler; defaults to rejection_sampler.
  • --typical-acceptance-sampler-posterior-threshold: lower bound on the posterior probability for the typical acceptance sampler; defaults to 0.09.
  • --typical-acceptance-sampler-posterior-alpha: scaling factor for the entropy-based threshold of the typical acceptance sampler; defaults to 0.3.
  • --disable-logprobs-during-spec-decoding: disable logprob computation during speculative decoding; defaults to True.
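
A hedged sketch of n-gram speculative decoding, which needs no separate draft model; it uses the flat flags described above, while newer releases fold them into --speculative-config (placeholder path):

```bash
# Propose draft tokens by matching n-grams in the prompt instead of running a draft model.
vllm serve /models/Qwen2.5-7B-Instruct \
  --speculative-model "[ngram]" \
  --num-speculative-tokens 5 \
  --ngram-prompt-lookup-max 4
```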

9. Multimodal support

  • --limit-mm-per-prompt: maximum number of multimodal input items allowed per prompt (e.g. image=16,video=2).
  • --mm-processor-kwargs: overrides for the multimodal input processor (e.g. the image processor) (JSON).
  • --disable-mm-preprocessor-cache: disable caching of processed multimodal inputs; defaults to False.
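
For example, allowing several images per prompt for a vision-language model (placeholder path):

```bash
# Accept up to 4 images per request.
vllm serve /models/Qwen2.5-VL-7B-Instruct \
  --limit-mm-per-prompt image=4
```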

10. Advanced configuration

  • --model-loader-extra-config: extra configuration for the model loader (JSON).
  • --ignore-patterns: patterns to ignore when loading the model (e.g. original/**/*); defaults to an empty list.
  • --preemption-mode: preemption mode (recompute or swap).
  • --served-model-name: model name(s) used in the API (multiple names are allowed).
  • --qlora-adapter-name-or-path: name or path of a QLoRA adapter.
  • --otlp-traces-endpoint: target URL for OpenTelemetry traces.
  • --collect-detailed-traces: which modules to collect detailed traces for (model, worker, or all).
  • --disable-async-output-proc: disable asynchronous output processing; defaults to False.
  • --override-neuron-config: override or set Neuron device configuration (JSON).
  • --override-pooler-config: override or set the pooling method of a pooling model (JSON).
  • --compilation-config: torch.compile configuration for the model (an optimization level or JSON).
  • --kv-transfer-config: configuration for distributed KV cache transfer (JSON).
  • --worker-cls: worker class used for distributed execution; defaults to auto.
  • --generation-config: folder path to the generation config (auto or a custom path).
  • --override-generation-config: override or set the generation config (JSON).
  • --enable-sleep-mode: enable sleep mode for the engine (CUDA only); defaults to False.
  • --calculate-kv-scales: dynamically compute k_scale and v_scale when kv-cache-dtype is fp8; defaults to False.
  • --additional-config: additional platform-specific configuration (JSON).
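
A short sketch of the API-naming and generation-override flags (placeholder path; the JSON mirrors the example from the help text):

```bash
# Expose the model under a shorter API name and override the default sampling temperature.
vllm serve /models/Qwen2.5-7B-Instruct \
  --served-model-name qwen2.5-7b \
  --override-generation-config '{"temperature": 0.5}'
```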

11. Logging and debugging

  • --disable-log-stats: disable statistics logging; defaults to False.
  • --disable-log-requests: disable request logging; defaults to False.
  • --max-log-len: maximum number of prompt characters or prompt token IDs printed in logs.
  • --return-tokens-as-token-ids: represent single tokens as strings of the form token_id:{token_id}; defaults to False.
  • --enable-prompt-tokens-details: include prompt token details in usage; defaults to False.

12. Miscellaneous

  • --disable-frontend-multiprocessing: if set, the OpenAI frontend server runs in the same process as the model serving engine; defaults to False.
  • --enable-request-id-headers: if set, the API server adds an X-Request-Id header to responses. Note: this can hurt performance at high QPS; defaults to False.
  • --enable-auto-tool-choice: enable automatic tool choice for supported models; specify the parser via --tool-call-parser (see the sketch after this list); defaults to False.
  • --enable-reasoning: enable the model's reasoning_content output; when enabled, the model can return reasoning content; defaults to False.
  • --reasoning-parser: reasoning parser matching the model in use; it converts reasoning content into the OpenAI API format and is used together with --enable-reasoning.
  • --tool-call-parser: tool call parser matching the model in use; it converts model-generated tool calls into the OpenAI API format and is used together with --enable-auto-tool-choice.
  • --tool-parser-plugin: name of a tool parser plugin used to parse model-generated tool calls into the OpenAI API format; names registered by the plugin can be used with --tool-call-parser; defaults to an empty string.
  • --tokenizer-pool-size: pool size for asynchronous tokenization; 0 means synchronous tokenization is used; defaults to 0.
  • --tokenizer-pool-type: pool type for asynchronous tokenization (e.g. ray); ignored when --tokenizer-pool-size is 0; defaults to ray.
  • --tokenizer-pool-extra-config: extra configuration for the asynchronous tokenizer pool (JSON); ignored when --tokenizer-pool-size is 0.
  • --use-v2-block-manager: [deprecated] no longer affects vLLM's behavior; the self-attention block manager (block manager v2) is now the default implementation; defaults to True.
  • --num-lookahead-slots: experimental scheduling setting used by speculative decoding; it will be replaced by the speculative config in the future; defaults to 0.
  • --long-prefill-token-threshold: for chunked prefill, a prompt longer than this many tokens is treated as a long prompt (roughly 4% of the model context length in practice); the flag defaults to 0.
  • --max-num-partial-prefills: maximum number of concurrent partial prefills under chunked prefill; defaults to 1.
  • --max-long-partial-prefills: maximum number of long prompts prefilled concurrently under chunked prefill; setting this lower than --max-num-partial-prefills lets shorter prompts jump ahead in the queue; defaults to 1.
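
A hedged sketch of the tool-calling and reasoning flags; the parser choices below are assumptions that depend on the model family (hermes for Qwen-style tool calls, deepseek_r1 for R1 distills), and newer versions enable reasoning from --reasoning-parser alone:

```bash
# Automatic tool choice for a Qwen-family model (placeholder path).
vllm serve /models/Qwen2.5-7B-Instruct \
  --enable-auto-tool-choice \
  --tool-call-parser hermes

# Reasoning output for an R1 distill; --enable-reasoning is only needed on older versions.
vllm serve /models/DeepSeek-R1-Distill-Qwen-32B \
  --enable-reasoning \
  --reasoning-parser deepseek_r1
```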

General-purpose language model (Qwen2.5-72B-Instruct)

  • Startup script (see the sketch below)
  • curl test (see the sketch below)
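
The original startup script and curl test did not survive the export; the following is a hedged sketch assuming a 4-GPU node and local paths, not the author's exact commands:

```bash
# Startup sketch: tensor parallelism across 4 GPUs, served under a short API name.
vllm serve /models/Qwen2.5-72B-Instruct \
  --served-model-name Qwen2.5-72B-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.9

# curl test against the chat completions endpoint.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-72B-Instruct",
    "messages": [{"role": "user", "content": "Hello, introduce yourself briefly."}]
  }'
```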

Reasoning model (R1-Distill-Qwen-32B)

  • curl test (see the sketch below)
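
A sketch of the curl test, assuming the server was launched with --reasoning-parser deepseek_r1 and the matching --served-model-name, so the response message carries reasoning_content in addition to content:

```bash
# The model field must match the served model name configured at launch.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "R1-Distill-Qwen-32B",
    "messages": [{"role": "user", "content": "How many r letters are in the word strawberry?"}]
  }'
```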

Multimodal model (Qwen2.5-VL-72B-Instruct)

  • curl test (see the sketch below)
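
A sketch of the multimodal curl test using the OpenAI-style image_url content part (the image URL is a placeholder):

```bash
# Text plus image input in a single user message.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-VL-72B-Instruct",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}}
      ]
    }]
  }'
```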

Embedding model (bce-embedding-base)

  • curl test (see the sketch below)
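
A sketch of the embedding deployment and curl test, assuming --task embed and placeholder paths:

```bash
# Serve the embedding model and query the OpenAI-style embeddings endpoint.
vllm serve /models/bce-embedding-base --task embed --served-model-name bce-embedding-base

curl http://127.0.0.1:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "bce-embedding-base", "input": ["hello world", "你好,世界"]}'
```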

Rerank model (bce-reranker-base)

  • curl test (see the sketch below)
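
A sketch of the reranker deployment and curl test; this assumes serving the cross-encoder with --task score and using the /score endpoint, which may differ across vLLM versions (newer releases also expose /rerank-style routes):

```bash
# Serve the reranker as a scoring model (paths and names are placeholders).
vllm serve /models/bce-reranker-base --task score --served-model-name bce-reranker-base

# Score one query against several candidate documents.
curl http://127.0.0.1:8000/score \
  -H "Content-Type: application/json" \
  -d '{
    "model": "bce-reranker-base",
    "text_1": "What is vLLM?",
    "text_2": ["vLLM is a fast LLM inference engine.", "The weather is nice today."]
  }'
```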

usage: vllm serve [model_tag] [options]
Start the vLLM OpenAI Compatible API server.
positional arguments: model_tag The model tag to serve (optional if specified in config) (default: None)
options: --allow-credentials Allow credentials. (default: False) --allowed-headers ALLOWED_HEADERS Allowed headers. (default: ['']) --allowed-methods ALLOWED_METHODS Allowed methods. (default: ['']) --allowed-origins ALLOWED_ORIGINS Allowed origins. (default: ['*']) --api-key API_KEY If provided, the server will require this key to be presented in the header. (default: None) --chat-template CHAT_TEMPLATE The file path to the chat template, or the template in single-line form for the specified model. (default: None) --chat-template-content-format {auto,string,openai} The format to render message content within a chat template. * "string" will render the content as a string. Example: "Hello World" * "openai" will render the content as a list of dictionaries, similar to OpenAI schema. Example: [{"type": "text", "text": "Hello world!"}] (default: auto) --config CONFIG Read CLI options from a config file.Must be a YAML with the following options:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#cli-reference (default: ) --data-parallel-start-rank DATA_PARALLEL_START_RANK, -dpr DATA_PARALLEL_START_RANK Starting data parallel rank for secondary nodes. (default: 0) --disable-fastapi-docs Disable FastAPI's OpenAPI schema, Swagger UI, and ReDoc endpoint. (default: False) --disable-frontend-multiprocessing If specified, will run the OpenAI frontend server in the same process as the model serving engine. (default: False) --disable-log-requests Disable logging requests. (default: False) --disable-log-stats Disable logging statistics. (default: False) --disable-uvicorn-access-log Disable uvicorn access log. (default: False) --enable-auto-tool-choice Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use. (default: False) --enable-prompt-tokens-details If set to True, enable prompt_tokens_details in usage. (default: False) --enable-request-id-headers If specified, API server will add X-Request-Id header to responses. Caution: this hurts performance at high QPS. (default: False) --enable-server-load-tracking If set to True, enable tracking server_load_metrics in the app state. (default: False) --enable-ssl-refresh Refresh SSL Context when SSL certificate files change (default: False) --headless Run in headless mode. See multi-node data parallel documentation for more details. (default: False) --host HOST Host name. (default: None) --lora-modules LORA_MODULES [LORA_MODULES ...] LoRA module configurations in either 'name=path' formator JSON format. Example (old format): 'name=path' Example (new format): {"name": "name", "path": "lora_path", "base_model_name": "id"} (default: None) --max-log-len MAX_LOG_LEN Max number of prompt characters or prompt ID numbers being printed in log. The default of None means unlimited. (default: None) --middleware MIDDLEWARE Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it to the server using @app.middleware('http'). If a class is provided, vLLM will add it to the server using app.add_middleware(). (default: []) --port PORT Port number. (default: 8000) --prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...] Prompt adapter configurations in the format name=path. Multiple adapters can be specified. (default: None) --response-role RESPONSE_ROLE The role name to return if request.add_generation_prompt=true. 
(default: assistant) --return-tokens-as-token-ids When --max-logprobs is specified, represents single tokens as strings of the form 'token_id:{token_id}' so that tokens that are not JSON-encodable can be identified. (default: False) --root-path ROOT_PATH FastAPI root_path when app is behind a path based routing proxy. (default: None) --ssl-ca-certs SSL_CA_CERTS The CA certificates file. (default: None) --ssl-cert-reqs SSL_CERT_REQS Whether client certificate is required (see stdlib ssl module's). (default: 0) --ssl-certfile SSL_CERTFILE The file path to the SSL cert file. (default: None) --ssl-keyfile SSL_KEYFILE The file path to the SSL key file. (default: None) --tool-call-parser {deepseek_v3,granite-20b-fc,granite,hermes,internlm,jamba,llama4_pythonic,llama4_json,llama3_json,mistral,phi4_mini_json,pythonic} or name registered in --tool-parser-plugin Select the tool call parser depending on the model that you're using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice. (default: None) --tool-parser-plugin TOOL_PARSER_PLUGIN Special the tool parser plugin write to parse the model-generated tool into OpenAI API format, the name register in this plugin can be used in --tool-call-parser. (default: ) --use-v2-block-manager [DEPRECATED] block manager v1 has been removed and SelfAttnBlockSpaceManager (i.e. block manager v2) is now the default. Setting this flag to True or False has no effect on vLLM behavior. (default: True) --uvicorn-log-level {debug,info,warning,error,critical,trace} Log level for uvicorn. (default: info) -h, --help show this help message and exit
ModelConfig: Configuration for the model.
  • -allowed-local-media-path ALLOWED_LOCAL_MEDIA_PATH Allowing API requests to read local images or videos from directories specified by the server file system. This is a security risk. Should only be enabled in trusted environments. (default: ) --code-revision CODE_REVISION The specific revision to use for the model code on the Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version. (default: None) --config-format {auto,hf,mistral} The format of the model config to load: - "auto" will try to load the config in hf format if available else it will try to load in mistral format. - "hf" will load the config in hf format. - "mistral" will load the config in mistral format. (default: auto) --disable-async-output-proc Disable async output processing. This may result in lower performance. (default: False) --disable-cascade-attn, --no-disable-cascade-attn Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it could be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention will be only used when the heuristic tells that it's beneficial. (default: False) --disable-sliding-window, --no-disable-sliding-window Whether to disable sliding window. If True, we will disable the sliding window functionality of the model, capping to sliding window size. If the model does not support sliding window, this argument is ignored. (default: False) --dtype {auto,bfloat16,float,float16,float32,half} Data type for model weights and activations: - "auto" will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models. - "half" for FP16. Recommended for AWQ quantization. - "float16" is the same as "half". - "bfloat16" for a balance between precision and range. - "float" is shorthand for FP32 precision. - "float32" for FP32 precision. (default: auto) --enable-prompt-embeds, --no-enable-prompt-embeds If True, enables passing text embeddings as inputs via the prompt_embeds key. Note that enabling this will double the time required for graph compilation. (default: False) --enable-sleep-mode, --no-enable-sleep-mode Enable sleep mode for the engine (only cuda platform is supported). (default: False) --enforce-eager, --no-enforce-eager Whether to always use eager-mode PyTorch. If True, we will disable CUDA graph and always execute the model in eager mode. If False, we will use CUDA graph and eager execution in hybrid for maximal performance and flexibility. (default: False) --generation-config GENERATION_CONFIG The folder path to the generation config. Defaults to "auto", the generation config will be loaded from model path. If set to "vllm", no generation config is loaded, vLLM defaults will be used. If set to a folder path, the generation config will be loaded from the specified folder path. If max_new_tokens is specified in generation config, then it sets a server-wide limit on the number of output tokens for all requests. (default: auto) --hf-config-path HF_CONFIG_PATH Name or path of the Hugging Face config to use. If unspecified, model name or path will be used. (default: None) --hf-overrides HF_OVERRIDES If a dictionary, contains arguments to be forwarded to the Hugging Face config. If a callable, it is called to update the HuggingFace config. (default: {}) --hf-token [HF_TOKEN] The token to use as HTTP bearer authorization for remote files . 
If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). (default: None) --logits-processor-pattern LOGITS_PROCESSOR_PATTERN Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors. (default: None) --max-logprobs MAX_LOGPROBS Maximum number of log probabilities to return when logprobs is specified in SamplingParams. The default value comes the default for the OpenAI Chat Completions API. (default: 20) --max-model-len MAX_MODEL_LEN Model context length (prompt and output). If unspecified, will be automatically derived from the model config. When passing via -max-model-len, supports k/m/g/K/M/G in human-readable format. Examples: - 1k -> 1000 - 1K -> 1024 - 25.6k -> 25,600 (default: None) --max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE Maximum sequence len covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, we fall back to the eager mode. (default: 8192) --model-impl {auto,vllm,transformers} Which implementation of the model to use: - "auto" will try to use the vLLM implementation, if it exists, and fall back to the Transformers implementation if no vLLM implementation is available. - "vllm" will use the vLLM model implementation. - "transformers" will use the Transformers model implementation. (default: auto) --override-generation-config OVERRIDE_GENERATION_CONFIG Overrides or sets generation config. e.g. {"temperature": 0.5}. If used with -generation-config auto, the override parameters will be merged with the default config from the model. If used with -generation-config vllm, only the override parameters are used. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {}) --override-neuron-config OVERRIDE_NEURON_CONFIG Initialize non-default neuron config or override default neuron config that are specific to Neuron devices, this argument will be used to configure the neuron config that can not be gathered from the vllm arguments. e.g. {"cast_logits_dtype": "bloat16"}. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {}) --override-pooler-config OVERRIDE_POOLER_CONFIG Initialize non-default pooling config or override default pooling config for the pooling model. e.g. {"pooling_type": "mean", "normalize": false}. (default: None) --quantization {aqlm,auto-round,awq,awq_marlin,bitblas,bitsandbytes,compressed-tensors,deepspeedfp,experts_int8,fbgemm_fp8,fp8,gguf,gptq,gptq_bitblas,gptq_marlin,gptq_marlin_24,hqq,ipex,marlin,modelopt,modelopt_fp4,moe_wna16,neuron_quant,ptpc_fp8,qqq,quark,torchao,tpu_int8,None}, -q {aqlm,auto-round,awq,awq_marlin,bitblas,bitsandbytes,compressed-tensors,deepspeedfp,experts_int8,fbgemm_fp8,fp8,gguf,gptq,gptq_bitblas,gptq_marlin,gptq_marlin_24,hqq,ipex,marlin,modelopt,modelopt_fp4,moe_wna16,neuron_quant,ptpc_fp8,qqq,quark,torchao,tpu_int8,None} Method used to quantize the weights. 
If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights. (default: None) --revision REVISION The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version. (default: None) --rope-scaling ROPE_SCALING RoPE scaling configuration. For example, {"rope_type":"dynamic","factor":2.0}. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {}) --rope-theta ROPE_THETA RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model. (default: None) --seed SEED Random seed for reproducibility. Initialized to None in V0, but initialized to 0 in V1. (default: None) --served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...] The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the -model argument. Noted that this name(s) will also be used in model_name tag content of prometheus metrics, if multiple names provided, metrics tag will take the first one. (default: None) --skip-tokenizer-init, --no-skip-tokenizer-init Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token ids. (default: False) --task {auto,classify,draft,embed,embedding,generate,reward,score,transcription} The task to use the model for. Each vLLM instance only supports one task, even if the same model can be used for multiple tasks. When the model only supports one task, "auto" can be used to select it; otherwise, you must specify explicitly which task to use. (default: auto) --tokenizer TOKENIZER Name or path of the Hugging Face tokenizer to use. If unspecified, model name or path will be used. (default: None) --tokenizer-mode {auto,custom,mistral,slow} Tokenizer mode: - "auto" will use the fast tokenizer if available. - "slow" will always use the slow tokenizer. - "mistral" will always use the tokenizer from mistral_common. - "custom" will use --tokenizer to select the preregistered tokenizer. (default: auto) --tokenizer-revision TOKENIZER_REVISION The specific revision to use for the tokenizer on the Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version. (default: None) --trust-remote-code, --no-trust-remote-code Trust remote code (e.g., from HuggingFace) when downloading the model and tokenizer. (default: False)
LoadConfig: Configuration for loading the model weights.
  • -download-dir DOWNLOAD_DIR Directory to download and load the weights, default to the default cache directory of Hugging Face. (default: None) --ignore-patterns IGNORE_PATTERNS [IGNORE_PATTERNS ...] The list of patterns to ignore when loading the model. Default to "original/**/*" to avoid repeated loading of llama's checkpoints. (default: None) --load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral,runai_streamer,runai_streamer_sharded,fastsafetensors} The format of the model weights to load: - "auto" will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available. - "pt" will load the weights in the pytorch bin format. - "safetensors" will load the weights in the safetensors format. - "npcache" will load the weights in pytorch format and store a numpy cache to speed up the loading. - "dummy" will initialize the weights with random values, which is mainly for profiling. - "tensorizer" will use CoreWeave's tensorizer library for fast weight loading. See the Tensorize vLLM Model script in the Examples section for more information. - "runai_streamer" will load the Safetensors weights using Run:ai Model Streamer. - "bitsandbytes" will load the weights using bitsandbytes quantization. - "sharded_state" will load weights from pre-sharded checkpoint files, supporting efficient loading of tensor-parallel models. - "gguf" will load weights from GGUF format files (details specified in https://github.com/ggml- org/ggml/blob/master/docs/gguf.md). - "mistral" will load weights from consolidated safetensors files used by Mistral models. (default: auto) --model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {}) --pt-load-map-location PT_LOAD_MAP_LOCATION pt_load_map_location: the map location for loading pytorch checkpoint, to support loading checkpoints can only be loaded on certain devices like "cuda", this is equivalent to {"": "cuda"}. Another supported format is mapping from different devices like from GPU 1 to GPU 0: {"cuda:1": "cuda:0"}. Note that when passed from command line, the strings in dictionary needs to be double quoted for json parsing. For more details, see original doc for map_location in https://pytorch.org/docs/stable/generated/torch.load.html (default: cpu) --qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH The -qlora-adapter-name-or-path has no effect, do not set it, and it will be removed in v0.10.0. (default: None) --use-tqdm-on-load, --no-use-tqdm-on-load Whether to enable tqdm for showing progress bar when loading model weights. (default: True)
DecodingConfig: Dataclass which contains the decoding strategy of the engine.
  • -enable-reasoning, --no-enable-reasoning [DEPRECATED] The -enable-reasoning flag is deprecated as of v0.9.0. Use -reasoning-parser to specify the reasoning parser backend instead. This flag (-enable-reasoning) will be removed in v0.10.0. When -reasoning-parser is specified, reasoning mode is automatically enabled. (default: None) --guided-decoding-backend {auto,guidance,lm-format-enforcer,outlines,xgrammar} Which engine will be used for guided decoding (JSON schema / regex etc) by default. With "auto", we will make opinionated choices based on request contents and what the backend libraries currently support, so the behavior is subject to change in each release. (default: auto) --guided-decoding-disable-additional-properties, --no-guided-decoding-disable-additional-properties If True, the guidance backend will not use additionalProperties in the JSON schema. This is only supported for the guidance backend and is used to better align its behaviour with outlines and xgrammar. (default: False) --guided-decoding-disable-any-whitespace, --no-guided-decoding-disable-any-whitespace If True, the model will not generate any whitespace during guided decoding. This is only supported for xgrammar and guidance backends. (default: False) --guided-decoding-disable-fallback, --no-guided-decoding-disable-fallback If True, vLLM will not fallback to a different backend on error. (default: False) --reasoning-parser {deepseek_r1,granite,qwen3} Select the reasoning parser depending on the model that you're using. This is used to parse the reasoning content into OpenAI API format. (default: )
ParallelConfig: Configuration for the distributed execution.
  • -data-parallel-address DATA_PARALLEL_ADDRESS, -dpa DATA_PARALLEL_ADDRESS Address of data parallel cluster head-node. (default: None) --data-parallel-rpc-port DATA_PARALLEL_RPC_PORT, -dpp DATA_PARALLEL_RPC_PORT Port for data parallel RPC communication. (default: None) --data-parallel-size DATA_PARALLEL_SIZE, -dp DATA_PARALLEL_SIZE Number of data parallel groups. MoE layers will be sharded according to the product of the tensor parallel size and data parallel size. (default: 1) --data-parallel-size-local DATA_PARALLEL_SIZE_LOCAL, -dpl DATA_PARALLEL_SIZE_LOCAL Number of data parallel replicas to run on this node. (default: None) --disable-custom-all-reduce, --no-disable-custom-all-reduce Disable the custom all-reduce kernel and fall back to NCCL. (default: False) --distributed-executor-backend {external_launcher,mp,ray,uni,None} Backend to use for distributed model workers, either "ray" or "mp" (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, "mp" will be used to keep processing on a single host. Otherwise, this will default to "ray" if Ray is installed and fail otherwise. Note that tpu and hpu only support Ray for distributed inference. (default: None) --enable-expert-parallel, --no-enable-expert-parallel Use expert parallelism instead of tensor parallelism for MoE layers. (default: False) --max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS Maximum number of parallel loading workers when loading model sequentially in multiple batches. To avoid RAM OOM when using tensor parallel and large models. (default: None) --pipeline-parallel-size PIPELINE_PARALLEL_SIZE, -pp PIPELINE_PARALLEL_SIZE Number of pipeline parallel groups. (default: 1) --ray-workers-use-nsight, --no-ray-workers-use-nsight Whether to profile Ray workers with nsight, see https://docs.ray.io/en/latest/ray- observability/user-guides/profiling.html#profiling-nsight-profiler. (default: False) --tensor-parallel-size TENSOR_PARALLEL_SIZE, -tp TENSOR_PARALLEL_SIZE Number of tensor parallel groups. (default: 1) --worker-cls WORKER_CLS The full name of the worker class to use. If "auto", the worker class will be determined based on the platform. (default: auto) --worker-extension-cls WORKER_EXTENSION_CLS The full name of the worker extension class to use. The worker extension class is dynamically inherited by the worker class. This is used to inject new attributes and methods to the worker class for use in collective_rpc calls. (default: )
CacheConfig: Configuration for the KV cache.
  • -block-size {1,8,16,32,64,128} Size of a contiguous cache block in number of tokens. This is ignored on neuron devices and set to -max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128. This config has no static default. If left unspecified by the user, it will be set in Platform.check_and_update_configs() based on the current platform. (default: None) --calculate-kv-scales, --no-calculate-kv-scales This enables dynamic calculation of k_scale and v_scale when kv_cache_dtype is fp8. If False, the scales will be loaded from the model checkpoint if available. Otherwise, the scales will default to 1.0. (default: False) --cpu-offload-gb CPU_OFFLOAD_GB The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass. (default: 0) --enable-prefix-caching, --no-enable-prefix-caching Whether to enable prefix caching. Disabled by default for V0. Enabled by default for V1. (default: None) --gpu-memory-utilization GPU_MEMORY_UTILIZATION The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, will use the default value of 0.9. This is a per-instance limit, and only applies to the current vLLM instance. It does not matter if you have another vLLM instance running on the same GPU. For example, if you have two vLLM instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance. (default: 0.9) --kv-cache-dtype {auto,fp8,fp8_e4m3,fp8_e5m2} Data type for kv cache storage. If "auto", will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3). (default: auto) --num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE Number of GPU blocks to use. This overrides the profiled num_gpu_blocks if specified. Does nothing if None. Used for testing preemption. (default: None) --prefix-caching-hash-algo {builtin,sha256} Set the hash algorithm for prefix caching: - "builtin" is Python's built-in hash. - "sha256" is collision resistant but with certain overheads. (default: builtin) --swap-space SWAP_SPACE Size of the CPU swap space per GPU (in GiB). (default: 4)
TokenizerPoolConfig: This config is deprecated and will be removed in a future release.
  • -tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG This parameter is deprecated and will be removed in a future release. Passing this parameter will have no effect. Please remove it from your configurations. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {}) --tokenizer-pool-size TOKENIZER_POOL_SIZE This parameter is deprecated and will be removed in a future release. Passing this parameter will have no effect. Please remove it from your configurations. (default: 0) --tokenizer-pool-type TOKENIZER_POOL_TYPE This parameter is deprecated and will be removed in a future release. Passing this parameter will have no effect. Please remove it from your configurations. (default: ray)
MultiModalConfig: Controls the behavior of multimodal models.
  • -disable-mm-preprocessor-cache, --no-disable-mm-preprocessor-cache If True, disable caching of the processed multi-modal inputs. (default: False) --limit-mm-per-prompt LIMIT_MM_PER_PROMPT The maximum number of input items allowed per prompt for each modality. Defaults to 1 (V0) or 999 (V1) for each modality. For example, to allow up to 16 images and 2 videos per prompt: {"images": 16, "videos": 2} Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {}) --mm-processor-kwargs MM_PROCESSOR_KWARGS Overrides for the multi-modal processor obtained from transformers.AutoProcessor.from_pretrained. The available overrides depend on the model that is being run. For example, for Phi-3-Vision: {"num_crops": 4}. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: None)
LoRAConfig: Configuration for LoRA.
  • -enable-lora, --no-enable-lora If True, enable handling of LoRA adapters. (default: None) --enable-lora-bias, --no-enable-lora-bias Enable bias for LoRA adapters. (default: False) --fully-sharded-loras, --no-fully-sharded-loras By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster. (default: False) --long-lora-scaling-factors LONG_LORA_SCALING_FACTORS [LONG_LORA_SCALING_FACTORS ...] Specify multiple scaling factors (which can be different from base model scaling factor - see eg. Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed. (default: None) --lora-dtype {auto,bfloat16,float16} Data type for LoRA. If auto, will default to base model dtype. (default: auto) --lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary). (default: 256) --max-cpu-loras MAX_CPU_LORAS Maximum number of LoRAs to store in CPU memory. Must be >= than max_loras. (default: None) --max-lora-rank MAX_LORA_RANK Max LoRA rank. (default: 16) --max-loras MAX_LORAS Max number of LoRAs in a single batch. (default: 1)
PromptAdapterConfig: Configuration for PromptAdapters.
  • -enable-prompt-adapter, --no-enable-prompt-adapter If True, enable handling of PromptAdapters. (default: None) --max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN Max number of PromptAdapters tokens. (default: 0) --max-prompt-adapters MAX_PROMPT_ADAPTERS Max number of PromptAdapters in a batch. (default: 1)
DeviceConfig: Configuration for the device to use for vLLM execution.
  • -device {auto,cpu,cuda,hpu,neuron,tpu,xpu} Device type for vLLM execution. This parameter is deprecated and will be removed in a future release. It will now be set automatically based on the current platform. (default: auto)
SpeculativeConfig: Configuration for speculative decoding.
  • -speculative-config SPECULATIVE_CONFIG The configurations for speculative decoding. Should be a JSON string. (default: None)
ObservabilityConfig: Configuration for observability - metrics and tracing.
  • -collect-detailed-traces {all,model,worker,None} [{all,model,worker,None} ...] It makes sense to set this only if -otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves use of possibly costly and or blocking operations and hence might have a performance impact. Note that collecting detailed timing information for each request can be expensive. (default: None) --otlp-traces-endpoint OTLP_TRACES_ENDPOINT Target URL to which OpenTelemetry traces will be sent. (default: None) --show-hidden-metrics-for-version SHOW_HIDDEN_METRICS_FOR_VERSION Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you use -show-hidden-metrics-for-version=0.7 as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release. (default: None)
SchedulerConfig: Scheduler configuration.
  • -cuda-graph-sizes CUDA_GRAPH_SIZES [CUDA_GRAPH_SIZES ...] Cuda graph capture sizes, default is 512. 1. if one value is provided, then the capture list would follow the pattern: [1, 2, 4] + [i for i in range(8, cuda_graph_sizes + 1, 8)] 2. more than one value (e.g. 1 2 128) is provided, then the capture list will follow the provided list. (default: [512]) --disable-chunked-mm-input, --no-disable-chunked-mm-input If set to true and chunked prefill is enabled, we do not want to partially schedule a multimodal item. Only used in V1 This ensures that if a request has a mixed prompt (like text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (like TTTTIIIII, leaving IIIII), it will be scheduled as TTTT in one step and IIIIIIIIII in the next. (default: False) --enable-chunked-prefill, --no-enable-chunked-prefill If True, prefill requests can be chunked based on the remaining max_num_batched_tokens. (default: None) --long-prefill-token-threshold LONG_PREFILL_TOKEN_THRESHOLD For chunked prefill, a request is considered long if the prompt is longer than this number of tokens. (default: 0) --max-long-partial-prefills MAX_LONG_PARTIAL_PREFILLS For chunked prefill, the maximum number of prompts longer than long_prefill_token_threshold that will be prefilled concurrently. Setting this less than max_num_partial_prefills will allow shorter prompts to jump the queue in front of longer prompts in some cases, improving latency. (default: 1) --max-num-batched-tokens MAX_NUM_BATCHED_TOKENS Maximum number of tokens to be processed in a single iteration. This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context. (default: None) --max-num-partial-prefills MAX_NUM_PARTIAL_PREFILLS For chunked prefill, the maximum number of sequences that can be partially prefilled concurrently. (default: 1) --max-num-seqs MAX_NUM_SEQS Maximum number of sequences to be processed in a single iteration. This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context. (default: None) --multi-step-stream-outputs, --no-multi-step-stream-outputs If False, then multi-step will stream outputs at the end of all steps (default: True) --num-lookahead-slots NUM_LOOKAHEAD_SLOTS The number of slots to allocate per sequence per step, beyond the known token ids. This is used in speculative decoding to store KV activations of tokens which may or may not be accepted. NOTE: This will be replaced by speculative config in the future; it is present to enable correctness tests until then. (default: 0) --num-scheduler-steps NUM_SCHEDULER_STEPS Maximum number of forward steps per scheduler call. (default: 1) --preemption-mode {recompute,swap,None} Whether to perform preemption by swapping or recomputation. If not specified, we determine the mode as follows: We use recomputation by default since it incurs lower overhead than swapping. However, when the sequence group has multiple sequences (e.g., beam search), recomputation is not currently supported. In such a case, we use swapping instead. (default: None) --scheduler-cls SCHEDULER_CLS The scheduler class to use. "vllm.core.scheduler.Scheduler" is the default scheduler. Can be a class directly or the path to a class of form "mod.custom_class". 
(default: vllm.core.scheduler.Scheduler) --scheduler-delay-factor SCHEDULER_DELAY_FACTOR Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt. (default: 0.0) --scheduling-policy {fcfs,priority} The scheduling policy to use: - "fcfs" means first come first served, i.e. requests are handled in order of arrival. - "priority" means requests are handled based on given priority (lower value means earlier handling) and time of arrival deciding any ties). (default: fcfs)
VllmConfig: Dataclass which contains all vllm-related configuration. This simplifies passing around the distinct configurations in the codebase.
  • -additional-config ADDITIONAL_CONFIG Additional config for specified platform. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. Contents must be hashable. (default: {}) --compilation-config COMPILATION_CONFIG, -O COMPILATION_CONFIG torch.compile configuration for the model. When it is a number (0, 1, 2, 3), it will be interpreted as the optimization level. NOTE: level 0 is the default level without any optimization. level 1 and 2 are for internal testing only. level 3 is the recommended level for production. Following the convention of traditional compilers, using O without space is also supported. O3 is equivalent to O 3. You can specify the full compilation config like so: {"level": 3, "cudagraph_capture_sizes": [1, 2, 4, 8]} Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: {"inductor_compile_config": {"enable_auto_functionalized_v2": false}}) --kv-events-config KV_EVENTS_CONFIG The configurations for event publishing. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: None) --kv-transfer-config KV_TRANSFER_CONFIG The configurations for distributed KV cache transfer. Should either be a valid JSON string or JSON keys passed individually. For example, the following sets of arguments are equivalent: - -json-arg '{"key1": "value1", "key2": {"key3": "value2"}}' - -json-arg.key1 value1 --json-arg.key2.key3 value2 (default: None)
Tip: Use vllm serve --help=<keyword> to explore arguments from help.
  • To view an argument group: --help=ModelConfig
  • To view a single argument: --help=max-num-seqs
  • To search by keyword: --help=max
  • To list all groups: --help=listgroup
 