Problem loading model

#1
by bjodah - opened

I'm probably doing something wrong, but any chance you can spot what that might be? I'm launching vLLM with these flags:

      --port 8000
      --served-model-name Devstral-Small-2-24B
      --model btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ
      --tokenizer_mode mistral
      --config_format mistral
      --load_format mistral
      --gpu-memory-utilization 0.8
      --max-model-len 128000
      --max-num-seqs 1
      --dtype float16
      --tool-call-parser mistral
      --enable-auto-tool-choice
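
For reference, assembled into a single invocation (a reconstruction assuming a plain shell launch; per the deprecation warning in the log below, the model goes to `vllm serve` as a positional argument rather than via `--model`):

      vllm serve btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ \
        --port 8000 \
        --served-model-name Devstral-Small-2-24B \
        --tokenizer_mode mistral \
        --config_format mistral \
        --load_format mistral \
        --gpu-memory-utilization 0.8 \
        --max-model-len 128000 \
        --max-num-seqs 1 \
        --dtype float16 \
        --tool-call-parser mistral \
        --enable-auto-tool-choice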

Error message from vLLM nightly (docker.io/vllm/vllm-openai:nightly):

WARNING 01-02 15:44:03 [argparse_utils.py:195] With `vllm serve`, you should provide the model as a positional argument or in a config file instead of via the `--model` option. The `--model` option will be removed in v0.13.
(APIServer pid=1) INFO 01-02 15:44:03 [api_server.py:1277] vLLM API server version 0.14.0rc1.dev212+gcc410e864
(APIServer pid=1) INFO 01-02 15:44:03 [utils.py:253] non-default args: {'model_tag': 'btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ', 'enable_auto_tool_choice': True, 'tool_call_parser': 'mistral', 'model': 'btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ', 'tokenizer_mode': 'mistral', 'dtype': 'float16', 'max_model_len': 128000, 'served_model_name': ['Devstral-Small-2-24B', '#', 'RTX', 'A6000', 'has', 'compute', 'capability', '8.6,', 'hence', 'no', 'FP8:'], 'config_format': 'mistral', 'load_format': 'mistral', 'gpu_memory_utilization': 0.8, 'max_num_seqs': 1}
(APIServer pid=1) Traceback (most recent call last):
(APIServer pid=1)   File "/usr/local/bin/vllm", line 10, in <module>
(APIServer pid=1)     sys.exit(main())
(APIServer pid=1)              ^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=1)     args.dispatch_function(args)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 60, in cmd
(APIServer pid=1)     uvloop.run(run_server(args))
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=1)     return __asyncio.run(
(APIServer pid=1)            ^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1)     return runner.run(main)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1)     return self._loop.run_until_complete(task)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=1)     return await main
(APIServer pid=1)            ^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1324, in run_server
(APIServer pid=1)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1343, in run_server_worker
(APIServer pid=1)     async with build_async_engine_client(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 171, in build_async_engine_client
(APIServer pid=1)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 197, in build_async_engine_client_from_engine_args
(APIServer pid=1)     vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=1)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1348, in create_engine_config
(APIServer pid=1)     model_config = self.create_model_config()
(APIServer pid=1)                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1205, in create_model_config
(APIServer pid=1)     return ModelConfig(
(APIServer pid=1)            ^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=1)     s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=1) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=1)   Value error, Failed to load mistral 'params.json' config for model btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ. Please check if the model is a mistral-format model and if the config file exists. [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
(APIServer pid=1)     For further information visit https://errors.pydantic.dev/2.12/v/value_error

Never mind, I had an issue with command-line argument passing; sorry for the noise. I'll get back with a proper reproducer if the issue persists.
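
Worth noting: in the first log above, `served_model_name` came through as `['Devstral-Small-2-24B', '#', 'RTX', 'A6000', 'has', 'compute', 'capability', '8.6,', 'hence', 'no', 'FP8:']`, i.e. a trailing `#` remark was word-split into extra arguments. That happens when the flags live somewhere that splits the string without treating `#` as a comment, such as a YAML block scalar in a compose file (the compose file itself is an assumption; only the resulting argv shows up in the log). The splitting can be mimicked with Python's shlex:

      import shlex

      # With the default comments=False, shlex treats '#' as an ordinary
      # token, which reproduces the argv seen in the log above.
      line = "--served-model-name Devstral-Small-2-24B # RTX A6000 has compute capability 8.6, hence no FP8:"
      print(shlex.split(line))
      # ['--served-model-name', 'Devstral-Small-2-24B', '#', 'RTX', 'A6000',
      #  'has', 'compute', 'capability', '8.6,', 'hence', 'no', 'FP8:']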

bjodah changed discussion status to closed

Hmmm... I think I've fixed the argument passing, but the problem seems to persist?

(APIServer pid=1) INFO 01-02 15:57:13 [api_server.py:1277] vLLM API server version 0.14.0rc1.dev212+gcc410e864
(APIServer pid=1) INFO 01-02 15:57:13 [utils.py:253] non-default args: {'model_tag': 'btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ', 'enable_auto_tool_choice': True, 'tool_call_parser': 'mistral', 'model': 'btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ', 'tokenizer_mode': 'mistral', 'dtype': 'float16', 'max_model_len': 128000, 'served_model_name': ['Devstral-Small-2-24B'], 'config_format': 'mistral', 'load_format': 'mistral', 'gpu_memory_utilization': 0.8, 'max_num_seqs': 1}
(APIServer pid=1) Traceback (most recent call last):
....
(APIServer pid=1) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=1)   Value error, Failed to load mistral 'params.json' config for model btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ. Please check if the model is a mistral-format model and if the config file exists. [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
(APIServer pid=1)     For further information visit https://errors.pydantic.dev/2.12/v/value_error
bjodah changed discussion status to open
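
One quick way to see why the mistral-format path fails here is to list the repo's files; a small sketch using `huggingface_hub` (the `list_repo_files` call is a standard API, but using it as a diagnostic here is a suggestion, not something from the thread):

      from huggingface_hub import list_repo_files

      files = list_repo_files("btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ")
      # --config_format/--load_format mistral expect a Mistral-native
      # 'params.json'; an HF-format repo ships 'config.json' instead.
      print("params.json" in files, "config.json" in files)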

I should add that, without these flags:

      --tokenizer_mode mistral
      --config_format mistral
      --load_format mistral

the error looks like this:

WARNING 01-02 15:59:46 [argparse_utils.py:195] With `vllm serve`, you should provide the model as a positional argument or in a config file instead of via the `--model` option. The `--model` option will be removed in v0.13.
(APIServer pid=1) INFO 01-02 15:59:46 [api_server.py:1277] vLLM API server version 0.14.0rc1.dev212+gcc410e864
(APIServer pid=1) INFO 01-02 15:59:46 [utils.py:253] non-default args: {'model_tag': 'btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ', 'enable_auto_tool_choice': True, 'tool_call_parser': 'mistral', 'model': 'btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ', 'dtype': 'float16', 'max_model_len': 128000, 'served_model_name': ['Devstral-Small-2-24B'], 'gpu_memory_utilization': 0.8, 'max_num_seqs': 1}
(APIServer pid=1) Traceback (most recent call last):
(APIServer pid=1)   File "/usr/local/bin/vllm", line 10, in <module>
(APIServer pid=1)     sys.exit(main())
(APIServer pid=1)              ^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=1)     args.dispatch_function(args)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 60, in cmd
(APIServer pid=1)     uvloop.run(run_server(args))
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=1)     return __asyncio.run(
(APIServer pid=1)            ^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1)     return runner.run(main)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1)     return self._loop.run_until_complete(task)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=1)     return await main
(APIServer pid=1)            ^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1324, in run_server
(APIServer pid=1)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1343, in run_server_worker
(APIServer pid=1)     async with build_async_engine_client(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 171, in build_async_engine_client
(APIServer pid=1)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1)     return await anext(self.gen)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 197, in build_async_engine_client_from_engine_args
(APIServer pid=1)     vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=1)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1348, in create_engine_config
(APIServer pid=1)     model_config = self.create_model_config()
(APIServer pid=1)                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1205, in create_model_config
(APIServer pid=1)     return ModelConfig(
(APIServer pid=1)            ^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=1)     s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/config/model.py", line 461, in __post_init__
(APIServer pid=1)     hf_config = get_config(
(APIServer pid=1)                 ^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 635, in get_config
(APIServer pid=1)     config_dict, config = config_parser.parse(
(APIServer pid=1)                           ^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 168, in parse
(APIServer pid=1)     config = AutoConfig.from_pretrained(
(APIServer pid=1)              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 1372, in from_pretrained
(APIServer pid=1)     return config_class.from_dict(config_dict, **unused_kwargs)
(APIServer pid=1)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 808, in from_dict
(APIServer pid=1)     config = cls(**config_dict)
(APIServer pid=1)              ^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/models/mistral3/configuration_mistral3.py", line 113, in __init__
(APIServer pid=1)     text_config = CONFIG_MAPPING[text_config["model_type"]](**text_config)
(APIServer pid=1)                   ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1)   File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 1048, in __getitem__
(APIServer pid=1)     raise KeyError(key)
(APIServer pid=1) KeyError: 'ministral3'
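
The same lookup failure can be reproduced outside vLLM, following the `AutoConfig` path shown in the traceback (a minimal sketch; per the reply below, a Transformers v4.x install raises the `KeyError`, while the v5 release candidate resolves `ministral3`):

      from transformers import AutoConfig

      # On transformers < 5, 'ministral3' is missing from CONFIG_MAPPING,
      # so this fails with KeyError: 'ministral3' exactly as above.
      cfg = AutoConfig.from_pretrained(
          "btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ"
      )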

Hi @bjodah,

What is your Transformers version? The `KeyError: 'ministral3'` indicates you are not on Transformers v5; you need to upgrade to the Transformers release candidate to use the Devstral 2 model. You don't want the `tokenizer_mode`, `config_format`, or `load_format` flags for this model either. I used a standard config.json file.

pip install --upgrade --pre transformers
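
To confirm which version is installed before and after the upgrade (a standard check via `transformers.__version__`):

      python -c "import transformers; print(transformers.__version__)"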
