Fixing the llama-cpp-python wheel build error: a pyproject.toml-based build failure

Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [34 lines of output]
      *** scikit-build-core 0.10.5 using CMake 3.30.2 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmp12mmpfoy/build/CMakeInit.txt
      -- The C compiler identification is GNU 11.4.0
      -- The CXX compiler identification is GNU 11.4.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/gcc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/g++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Could NOT find Git (missing: GIT_EXECUTABLE)
      CMake Warning at vendor/llama.cpp/cmake/build-info.cmake:14 (message):
        Git not found.  Build info will not be accurate.
      Call Stack (most recent call first):
        vendor/llama.cpp/CMakeLists.txt:74 (include)
      
      
      CMake Error at vendor/llama.cpp/CMakeLists.txt:95 (message):
        LLAMA_CUBLAS is deprecated and will be removed in the future.
      
        Use GGML_CUDA instead
      
      Call Stack (most recent call first):
        vendor/llama.cpp/CMakeLists.txt:100 (llama_option_depr)
      
      
      -- Configuring incomplete, errors occurred!
      
      *** CMake configuration failed
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)

When installing Xinference on Ubuntu 22.04, the install fails with the error shown above.

Some context: this machine is a server used to serve large language models, with two NVIDIA RTX 4090 GPUs. The base environment already has CUDA and the PyTorch compute stack installed, and installing Xinference directly into that base environment works fine. The problem is that the installed xinference package conflicts with the environment already serving the models, so I installed conda, created a fresh environment named xin_env with it, and hit this error when installing Xinference inside xin_env.
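For reference, here is a minimal sketch of the setup that triggers the failure, assuming conda is already on PATH (the Python version below is an assumption; match it to your stack):

```bash
# Create an isolated conda environment for Xinference
# (python=3.10 is an assumption; pick the version your stack requires)
conda create -n xin_env python=3.10 -y
conda activate xin_env

# This is the step that fails while pip builds the llama-cpp-python wheel
pip install xinference
```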

Solution:

1. Install CUDA and PyTorch in the xin_env environment (see the sketch below).
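The log also names the direct cause: the vendored llama.cpp now rejects the old LLAMA_CUBLAS option and asks for GGML_CUDA instead, while the missing Git only produces a warning. A minimal sketch of the whole fix inside xin_env follows; the conda channel, the cu121 wheel index, and the version choices are assumptions, so match them to the driver and CUDA version already on the server:

```bash
conda activate xin_env

# Step 1: install the CUDA toolchain and PyTorch inside xin_env
# (the nvidia channel and the cu121 wheel index are assumptions; pick
#  the variants matching the CUDA driver installed on the server)
conda install -y -c nvidia cuda-toolkit
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Optional: installing git silences the "Could NOT find Git" warning
sudo apt-get install -y git

# Pre-build llama-cpp-python with the renamed CUDA flag; the CMake error
# itself says LLAMA_CUBLAS is deprecated and that GGML_CUDA replaces it
CMAKE_ARGS="-DGGML_CUDA=on" pip install --no-cache-dir --force-reinstall llama-cpp-python

# With llama-cpp-python already built, re-run the Xinference install
pip install xinference
```

If the build still trips over the deprecated option, check whether the shell environment or an old install guide is still exporting CMAKE_ARGS with -DLLAMA_CUBLAS=on.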

Author: swany
