Docker Package Installation Troubles

We are reaching out for assistance with a specific issue we’ve encountered while working with Docker. Despite successfully installing and running the challenge Docker container, we’re facing difficulties in installing a new package, which we’ll refer to as ‘langchain’ (but we did try other packages as well), along with its dependencies.

Our main challenge is installing new dependencies within our Docker environment. Despite several attempts, we haven’t been successful. Here’s a summary of what we’ve tried:

  • Direct Installation in Docker: Logged into the running Docker container and used pip install to add the package, followed by creating a new ‘conda-lock-gpu’ file. This method was not successful.
  • Conda-Lock File Approach: Attempted to create a new ‘conda-lock-gpu’ file from ‘environment-gpu’, both inside and outside the Docker environment. This also did not yield a positive result.
  • Modifying Dockerfile and Entrypoint: Edited the Dockerfile and the entrypoint to include pip/mamba install commands for the package. Unfortunately, this approach also failed.

We are seeking advice, guidance, or any relevant experience from the community that could help us resolve this issue. If anyone has faced a similar challenge or knows of a guide or method to successfully install new packages in the challenge’s Docker, your input would be greatly appreciated.

We understand that once we reach a solution we will open a pull request with the new package for the final submission; our current issue, however, lies in the initial installation process, which we wanted to test offline before opening the pull request.

Thanks!

Hey @kevinr-

Can you include more information about what’s not working for you, like any error messages you are receiving?

I was able to add langchain as a dependency by following the instructions on the runtime repo:

  1. I edited the environment-cpu.yml and environment-gpu.yml files so they include langchain (unpinned). Each .yml file looked like this:

    ...
    - keras=2.13.1
    - langchain
    - lightgbm=4.1.0
    ...
    
  2. I then ran make update-lockfiles which solved for a compatible version of langchain (in this case 0.0.346) and saved the corresponding conda-lock lockfile. (For the PR, you would want to modify each .yml file to pin the solved version of langchain)

  3. I ran make build to rebuild the docker container.
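For the pinning note in step 2, the edit to each .yml file would look something like this (using the 0.0.346 that this particular solve produced; your solved version may differ):

```yaml
...
- keras=2.13.1
- langchain=0.0.346
- lightgbm=4.1.0
...
```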

You may also want to add some tests to runtime/test_packages.yml that ensure langchain is working properly.

Please follow up if you continue running into issues!

-Chris

Thanks @chrisk-dd

We did try with both conda and pip, following the instructions, and we get the same error.
(we are using pip because one package we want to install, langchain-community, is not available in conda)

When we run make update-lockfiles we get:

Locking the CPU environment
conda-lock \
        --mamba \
        -p linux-64 \
        --without-cuda \
        -f runtime/environment-cpu.yml \
        --lockfile runtime/conda-lock-cpu.yml
Locking dependencies for ['linux-64']...
INFO:conda_lock.conda_solver:linux-64 using specs ['accelerate 0.25.0.*', 'aiofiles 23.2.1.*', 'aiohttp 3.8.5.*', 'dill 0.3.7.*', 'diskcache 5.6.3.*', 'einops 0.7.0.*', 'faiss-cpu 1.7.4.*', 'gensim 4.3.2.*', 'jsonpickle 3.0.2.*', 'keras 2.13.1.*', 'lightgbm 4.1.0.*', 'loguru 0.7.*', 'more-itertools 10.2.0.*', 'numba 0.58.*', 'numpy 1.25.2.*', 'pandas 2.1.3.*', 'peft 0.7.1.*', 'pip *', 'polars 0.19.0.*', 'pytest 7.4.*', 'python 3.10.13.*', 'pytorch-lightning 2.1.1.*', 'pytorch::pytorch==2.1.1[build=*cpu*]', 'ray-default 2.8.1.*', 'scikit-learn 1.3.2.*', 'scipy 1.9.3.*', 'sentencepiece 0.1.99.*', 'sentence-transformers 2.2.2.*', 'spacy 3.7.2.*', 'statsmodels 0.14.0.*', 'tensorflow-base 2.13.1 cpu*', 'tensorflow 2.13.1 cpu*', 'tqdm 4.66.*', 'transformers 4.35.2.*', 'xarray 2023.11.0.*']
 - Install lock using: conda-lock install --name YOURENV runtime/conda-lock-cpu.yml
Locking the GPU environment
conda-lock \
        --mamba \
        -p linux-64 \
        --with-cuda 11.8 \
        -f runtime/environment-gpu.yml \
        --lockfile runtime/conda-lock-gpu.yml
Locking dependencies for ['linux-64']...
INFO:conda_lock.conda_solver:linux-64 using specs ['accelerate 0.25.0.*', 'aiofiles 23.2.1.*', 'aiohttp 3.8.5.*', 'cudatoolkit 11.8.*', 'cupy 12.3.0.*', 'dill 0.3.7.*', 'diskcache 5.6.3.*', 'einops 0.7.0.*', 'faiss-gpu 1.7.4.*', 'gensim 4.3.2.*', 'jsonpickle 3.0.2.*', 'keras 2.13.1.*', 'lightgbm 4.1.0.*', 'loguru 0.7.*', 'more-itertools 10.2.0.*', 'numba 0.58.*', 'numpy 1.25.2.*', 'nvidia::cuda-cudart=11.8', 'pandas 2.1.3.*', 'peft 0.7.1.*', 'pip *', 'polars 0.19.0.*', 'pytest 7.4.*', 'python 3.10.13.*', 'pytorch-lightning 2.1.1.*', 'pytorch::pytorch==2.1.1[build=*cuda11.8*]', 'ray-default 2.8.1.*', 'scikit-learn 1.3.2.*', 'scipy 1.9.3.*', 'sentencepiece 0.1.99.*', 'sentence-transformers 2.2.2.*', 'spacy 3.7.2.*', 'statsmodels 0.14.0.*', 'tensorflow-base 2.13.1 cuda118*', 'tensorflow 2.13.1 cuda118*', 'tqdm 4.66.*', 'transformers 4.35.2.*', 'xarray 2023.11.0.*', 'xformers::xformers==0.0.23[build=*cu11.8*]']
 - Install lock using: conda-lock install --name YOURENV runtime/conda-lock-gpu.yml
make: python: No such file or directory
make: *** [Makefile:139: update-lockfiles] Error 127

But we do get the lock file changes.

If we then run make build, we get:

[...]
Successfully installed anyio-4.2.0 dataclasses-json-0.6.3 greenlet-3.0.3 jsonpatch-1.33 jsonpointer-2.4 langchain-0.1.4 langchain-community-0.0.16 langchain-core-0.1.17 langsmith-0.0.85 marshmallow-3.20.2 mypy-extensions-1.0.0 sniffio-1.3.0 sqlalchemy-2.0.24 tenacity-8.2.3 typing-inspect-0.9.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
info     libmamba ****************** Backtrace Start ******************
debug    libmamba Loading configuration
trace    libmamba Compute configurable 'create_base'
trace    libmamba Compute configurable 'no_env'
trace    libmamba Compute configurable 'no_rc'
trace    libmamba Compute configurable 'rc_files'
trace    libmamba Compute configurable 'root_prefix'
trace    libmamba Get RC files configuration from locations up to HomeDir
trace    libmamba Configuration not found at '/root/.mambarc'
trace    libmamba Configuration not found at '/root/.condarc'
trace    libmamba Configuration not found at '/opt/conda/.condarc'
trace    libmamba Configuration not found at '/etc/conda/.condarc'
[... further identical 'Configuration not found' lines for the standard mambarc/condarc locations trimmed ...]
trace    libmamba Update configurable 'no_env'
trace    libmamba Compute configurable 'envs_dirs'
trace    libmamba Compute configurable 'file_specs'
trace    libmamba Compute configurable 'spec_file_env_name'
trace    libmamba Compute configurable 'env_name'
trace    libmamba Compute configurable 'use_target_prefix_fallback'
trace    libmamba Compute configurable 'target_prefix'
trace    libmamba Get RC files configuration from locations up to TargetPrefix
[... same 'Configuration not found' lines repeated ...]
trace    libmamba Update configurable 'no_env'
trace    libmamba Compute configurable 'relocate_prefix'
trace    libmamba Compute configurable 'target_prefix_checks'
error    libmamba No target prefix specified
critical libmamba Aborting.
info     libmamba ****************** Backtrace End ********************
The command '/usr/local/bin/_dockerfile_shell.sh micromamba install --name base --yes --file /tmp/conda-lock.yml &&     micromamba install --yes pip &&     micromamba clean --all --force-pkgs-dirs --yes' returned a non-zero code: 1
make: *** [Makefile:114: build] Error 1

It looks like langchain-community has a conda-forge feedstock:

❯ mamba search langchain-community --channel conda-forge
Loading channels: done
# Name                       Version           Build  Channel             
langchain-community            0.0.2    pyhd8ed1ab_0  conda-forge         
langchain-community            0.0.7    pyhd8ed1ab_0  conda-forge         
langchain-community            0.0.7    pyhd8ed1ab_1  conda-forge         
langchain-community            0.0.8    pyhd8ed1ab_0  conda-forge         
langchain-community            0.0.9    pyhd8ed1ab_0  conda-forge         
langchain-community           0.0.11    pyhd8ed1ab_0  conda-forge         
langchain-community           0.0.12    pyhd8ed1ab_0  conda-forge         
langchain-community           0.0.13    pyhd8ed1ab_0  conda-forge         
langchain-community           0.0.15    pyhd8ed1ab_0  conda-forge         
langchain-community           0.0.16    pyhd8ed1ab_0  conda-forge         

so you should not have to install with pip.

It also looks like the way you are trying to pip-install the package is not correct (the mamba error says you haven’t specified the environment you are trying to install to or update). If there are pip-only packages, you should specify them in the environment file as follows:

...
  - xarray=2023.11.0
  - pip:
    - langchain-community

Dear @chrisk-dd, we did try with the following environment file (both CPU and GPU) but we are getting the same error (notice langchain right after keras):

name: snomedct
channels:
  - conda-forge
dependencies:
  - accelerate=0.25.0
  - aiofiles=23.2.1
  - aiohttp=3.8.5
  - cudatoolkit=11.8
  - cupy=12.3.0
  - dill=0.3.7
  - diskcache=5.6.3
  - einops=0.7.0
  - faiss-gpu=1.7.4
  - gensim=4.3.2
  - jsonpickle=3.0.2
  - keras=2.13.1
  - langchain
  - lightgbm=4.1.0
  - loguru=0.7
  - more-itertools=10.2.0
  - numba=0.58
  - numpy=1.25.2
  - nvidia::cuda-cudart=11.8
  - pandas=2.1.3
  - peft=0.7.1
  - pip=23.3
  - polars=0.19.0
  - pytest=7.4
  - python=3.10.13
  - pytorch-lightning=2.1.1
  - pytorch::pytorch=2.1.1=*cuda11.8*
  - ray-default=2.8.1
  - scikit-learn=1.3.2
  - scipy=1.9.3
  - sentencepiece=0.1.99
  - sentence-transformers=2.2.2
  - spacy=3.7.2
  - statsmodels=0.14.0
  - tensorflow-base=2.13.1=cuda118*
  - tensorflow=2.13.1=cuda118*
  - tqdm=4.66
  - transformers=4.35.2
  - xarray=2023.11.0
  - xformers::xformers==0.0.23=*cu11.8*

Are you certain the error you get is the same as the previous error?

When I initially tried adding langchain and langchain-community, I saw a package conflict error (the version of transformers that was pinned conflicts with what langchain-community requires), but I could solve for that error by unpinning transformers and re-running the solve.

Failed to parse json, Expecting value: line 1 column 1 (char 0)
Could not lock the environment for platform linux-64
Could not solve for environment specs
The following packages are incompatible
├─ dill 0.3.7**  is requested and can be installed;
├─ langchain-community is installable and it requires
│  └─ datasets >=2.15.0,<3.0.0 , which can be installed;
├─ python 3.10.13**  is installable and it requires
│  └─ python_abi 3.10.* *_cp310, which can be installed;
└─ transformers 4.35.2**  is not installable because it requires
   ├─ datasets !=2.5.0  with the potential options
   │  ├─ datasets 2.15.0 would require
   │  │  └─ huggingface_hub >=0.18.0 , which can be installed;
   │  ├─ datasets [1.1.3|1.10.0|...|2.2.1] conflicts with any installable versions previously reported;
   │  ├─ datasets 2.2.2 would require
   │  │  └─ dill <0.3.5 , which conflicts with any installable versions previously reported;
   │  ├─ datasets [2.3.2|2.4.0|...|2.6.1] would require
   │  │  └─ dill <0.3.6 , which conflicts with any installable versions previously reported;
   │  ├─ datasets [2.7.0|2.7.1|2.9.0] would require
   │  │  └─ dill <0.3.7 , which conflicts with any installable versions previously reported;
   │  ├─ datasets [2.10.0|2.10.1|...|2.13.1] would require
   │  │  └─ dill >=0.3.0,<0.3.7 , which conflicts with any installable versions previously reported;
   │  └─ datasets [2.16.0|2.16.1] would require
   │     └─ huggingface_hub >=0.19.4 , which can be installed;
   └─ tokenizers >=0.14,<0.15  but there are no viable options
      ├─ tokenizers 0.14.1 would require
      │  └─ python_abi 3.12.* *_cp312, which conflicts with any installable versions previously reported;
      ├─ tokenizers [0.14.0|0.14.1] would require
      │  └─ huggingface_hub >=0.16.4,<0.18 , which conflicts with any installable versions previously reported;
      ├─ tokenizers [0.14.0|0.14.1] would require
      │  └─ python_abi 3.11.* *_cp311, which conflicts with any installable versions previously reported;
      ├─ tokenizers [0.14.0|0.14.1] would require
      │  └─ python_abi 3.8.* *_cp38, which conflicts with any installable versions previously reported;
      └─ tokenizers [0.14.0|0.14.1] would require
         └─ python_abi 3.9.* *_cp39, which conflicts with any installable versions previously reported.
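Concretely, resolving a conflict like the one above can mean loosening the offending pin in the environment files and re-running make update-lockfiles (a sketch; re-pin once the solver picks a compatible version):

```yaml
# before: the pin that conflicts with langchain-community's datasets requirement
# - transformers=4.35.2
# after: let the solver choose, then pin the solved version for the PR
- transformers
```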

@kevinr while we generally avoid adding packages on a user’s behalf, I have added langchain and langchain-community in this pull request. Once the containers finish building momentarily, you should be able to pull the most recent container image and have access to these libraries.

If there are additional packages you would like to add and you keep running into issues, please post the details of the changes you made and the error messages you are receiving. The approach you should take is to create a new conda-lock file from outside the Docker runtime environment, as opposed to doing anything inside the runtime or modifying the Dockerfile entrypoint itself.

If you continue to run into issues doing so, I would recommend creating a clean conda environment for container development and re-installing the packages necessary to update the environment inside it (from dev-requirements.txt), and perhaps even trying to update your conda installation or switching to the Miniforge distribution.

Thanks @chrisk-dd!! Will definitely do as you suggest.

@chrisk-dd

We’ve successfully figured out the process for generating files for Conda libraries. Thank you all for your guidance and support in this matter.

During our further work, we’ve come across an additional challenge involving the llama-cpp-python dependency. Please find the details below, along with a request for your suggestions on resolving these issues within the Docker environment.

  • Installation Method: Initially, we thought llama-cpp-python didn’t have a Conda distribution. However, it appears there are some versions available on Conda. Running conda search llama-cpp-python --channel conda-forge reveals some possible versions. Despite this, we still face a significant challenge related to GPU support.
  • GPU Support Issue: The primary concern is enabling GPU support for llama-cpp-python. The standard pip installation command only sets up the CPU version. To activate GPU support, specific parameters must be passed to the GCC compiler. According to the “Installation with Specific Hardware Acceleration” section on the python bindings GitHub, the required command is (might vary depending on the specific hardware used):
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

Implementing this we believe would require modifying the Dockerfile or the entrypoint script, which might conflict with the existing challenge protocols.

Could you offer insights or recommendations on how to effectively integrate llama-cpp-python with GPU support into the Docker setup?
Thank you in advance for your assistance

Hey @kevinr-

In this case, I would take the following approach:

  1. First, add some tests to test_packages.py that check whether llama_cpp can be imported and has the GPU access you expect. These will fail until the package is correctly installed.

  2. Check to see whether there are any binaries on conda-forge for llama-cpp-python that are pre-compiled for GPU support, specifically for CUDA 11.8. We do this for other libraries in the environment file, e.g., tensorflow=2.13.1=*cuda118*. Different packages have different conventions for how they specify this (some include the . in 11.8, others don’t, some only use cu instead of cuda) so you’ll have to do a bit of digging to figure that out.

    If there are any such binaries, I would try to install those and test whether it works. This might look like modifying environment-gpu.yml to contain a line like

    - llama-cpp-python=*=*cuda118*
    

    and unpinning packages as necessary to get a solved environment. Once you have a solved environment, build the container (make build) and run the tests from step 1 (make test-container) to see whether it worked.

  3. If that doesn’t work, I’d try adding a - pip: section to the environment file and modifying the Dockerfile so that the values of CMAKE_ARGS and FORCE_CMAKE were set appropriately. Then, similarly try installing and see whether it works.

  4. If the above fails, try modifying the Dockerfile as necessary to get a functioning install. Run tests to ensure it’s working.
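For step 3, a hedged sketch of what the Dockerfile change might look like (this is not the challenge’s actual Dockerfile; the variable values come from the llama-cpp-python instructions quoted earlier):

```dockerfile
# Set the build flags before the environment is created, so that the pip:
# section of the lockfile compiles llama-cpp-python with cuBLAS/GPU support.
ENV CMAKE_ARGS="-DLLAMA_CUBLAS=on" \
    FORCE_CMAKE=1
# ... existing micromamba install of /tmp/conda-lock.yml follows ...
```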

Hope this helps get this sorted. Please note that per the rules (and as noted here), submissions may not use software that is not licensed under an Open Source Initiative license, which the Llama license is not, so solutions that use Llama models would not qualify for prizes. That said, llama-cpp-python supports models beyond Llama, so it’s not unreasonable to submit a PR that adds support for it.

-Chris

Dear @chrisk-dd,

As you suggested, we tried to find a suitable version on the conda channels, but unfortunately the only version built against cuda118 is incompatible with the other pinned packages.

We were instead able to find a version that works with pip.

Before checking whether it also works inside Docker, we noticed some issues generating the lock file. We also tried to generate a lock file from the base environment-gpu.yml, but there seem to be some issues with the channels used. While creating a new env from environment-gpu.yml works fine, re-solving the entire environment with

conda-lock -f environment-gpu.yml --lockfile test.yml

gives us “The following packages are not available from current channels”.

We have tried checking the conda channel priority and also adding the channels manually, but we get the same error.
Do you have any suggestions on how to set up conda-lock correctly?
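For reference, the Makefile target we ran earlier invokes conda-lock with the platform restricted to linux-64; the equivalent standalone command would be roughly:

```shell
conda-lock --mamba -p linux-64 --with-cuda 11.8 \
    -f runtime/environment-gpu.yml --lockfile runtime/conda-lock-gpu.yml
```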

    conda-lock -f environment-gpu.yml --lockfile test.yml
Locking dependencies for ['linux-64', 'osx-64', 'osx-arm64', 'win-64']...
INFO:conda_lock.conda_solver:linux-64 using specs ['accelerate 0.25.0.*', 'aiofiles 23.2.1.*', 'aiohttp 3.8.5.*', 'cudatoolkit 11.8.*', 'cupy 12.3.0.*', 'dill 0.3.7.*', 'diskcache 5.6.3.*', 'einops 0.7.0.*', 'faiss-gpu 1.7.4.*', 'gensim 4.3.2.*', 'jsonpickle 3.0.2.*', 'keras 2.13.1.*', 'langchain 0.1.4.*', 'langchain-community 0.0.16.*', 'lightgbm 4.1.0.*', 'loguru 0.7.*', 'more-itertools 10.2.0.*', 'numba 0.58.*', 'numpy 1.25.2.*', 'nvidia::cuda-cudart=11.8', 'pandas 2.1.3.*', 'peft 0.7.1.*', 'pip *', 'polars 0.19.0.*', 'pytest 7.4.*', 'python 3.10.13.*', 'pytorch-lightning 2.1.1.*', 'pytorch::pytorch==2.1.1[build=*cuda11.8*]', 'ray-default 2.8.1.*', 'scikit-learn 1.3.2.*', 'scipy 1.9.3.*', 'sentencepiece 0.1.99.*', 'sentence-transformers 2.2.2.*', 'spacy 3.7.2.*', 'statsmodels 0.14.0.*', 'tensorflow-base 2.13.1 cuda118*', 'tensorflow 2.13.1 cuda118*', 'tqdm 4.66.*', 'transformers 4.37.1.*', 'xarray 2023.11.0.*', 'xformers::xformers==0.0.23[build=*cu11.8*]']
INFO:conda_lock.conda_solver:osx-64 using specs [... same spec list as for linux-64 above ...]
Could not lock the environment for platform osx-64
The following packages are not available from current channels:

  - nvidia::cuda-cudart==11.8
  - cudatoolkit=11.8*
  - cupy=12.3.0*
  - faiss-gpu=1.7.4*
  - pytorch::pytorch==2.1.1[build=*cuda11.8*]
  - tensorflow==2.13.1[build=cuda118*]
  - tensorflow-base==2.13.1[build=cuda118*]
  - xformers::xformers==0.0.23[build=*cu11.8*]

Current channels:

  - https://conda.anaconda.org/conda-forge
  - file:///tmp/tmpwvhp9onc
  - https://conda.anaconda.org/nvidia
  - https://conda.anaconda.org/pytorch
  - https://conda.anaconda.org/xformers
  - https://repo.anaconda.com/pkgs/free

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

    Command: ['/home/pescu/miniconda3/bin/conda', 'create', '--prefix', '/tmp/tmp77tgypin/prefix', '--dry-run', '--json', '--override-channels', '--channel', 'conda-forge', '--channel', 'file:///tmp/tmpwvhp9onc', 'accelerate 0.25.0.*', 'aiofiles 23.2.1.*', 'aiohttp 3.8.5.*', 'cudatoolkit 11.8.*', 'cupy 12.3.0.*', 'dill 0.3.7.*', 'diskcache 5.6.3.*', 'einops 0.7.0.*', 'faiss-gpu 1.7.4.*', 'gensim 4.3.2.*', 'jsonpickle 3.0.2.*', 'keras 2.13.1.*', 'langchain 0.1.4.*', 'langchain-community 0.0.16.*', 'lightgbm 4.1.0.*', 'loguru 0.7.*', 'more-itertools 10.2.0.*', 'numba 0.58.*', 'numpy 1.25.2.*', 'nvidia::cuda-cudart=11.8', 'pandas 2.1.3.*', 'peft 0.7.1.*', 'pip *', 'polars 0.19.0.*', 'pytest 7.4.*', 'python 3.10.13.*', 'pytorch-lightning 2.1.1.*', 'pytorch::pytorch==2.1.1[build=*cuda11.8*]', 'ray-default 2.8.1.*', 'scikit-learn 1.3.2.*', 'scipy 1.9.3.*', 'sentencepiece 0.1.99.*', 'sentence-transformers 2.2.2.*', 'spacy 3.7.2.*', 'statsmodels 0.14.0.*', 'tensorflow-base 2.13.1 cuda118*', 'tensorflow 2.13.1 cuda118*', 'tqdm 4.66.*', 'transformers 4.37.1.*', 'xarray 2023.11.0.*', 'xformers::xformers==0.0.23[build=*cu11.8*]']
    STDOUT:
{
  "allow_retry": false,
  "caused_by": "None",
  "channel_urls": [
    {
      "auth": null,
      "location": "conda.anaconda.org",
      "name": "conda-forge",
      "package_filename": null,
      "platform": null,
      "scheme": "https",
      "token": null
    },
    {
      "auth": null,
      "location": "/tmp",
      "name": "tmpwvhp9onc",
      "package_filename": null,
      "platform": null,
      "scheme": "file",
      "token": null
    },
    {
      "auth": null,
      "location": "conda.anaconda.org",
      "name": "nvidia",
      "package_filename": null,
      "platform": null,
      "scheme": "https",
      "token": null
    },
    {
      "auth": null,
      "location": "conda.anaconda.org",
      "name": "pytorch",
      "package_filename": null,
      "platform": null,
      "scheme": "https",
      "token": null
    },
    {
      "auth": null,
      "location": "conda.anaconda.org",
      "name": "xformers",
      "package_filename": null,
      "platform": null,
      "scheme": "https",
      "token": null
    },
    {
      "auth": null,
      "location": "repo.anaconda.com",
      "name": "pkgs/free",
      "package_filename": null,
      "platform": null,
      "scheme": "https",
      "token": null
    }
  ],
  "channels_formatted": "  - https://conda.anaconda.org/conda-forge\n  - file:///tmp/tmpwvhp9onc\n  - https://conda.anaconda.org/nvidia\n  - https://conda.anaconda.org/pytorch\n  - https://conda.anaconda.org/xformers\n  - https://repo.anaconda.com/pkgs/free",
  "error": "PackagesNotFoundError: The following packages are not available from current channels:\n\n  - nvidia::cuda-cudart==11.8\n  - cudatoolkit=11.8*\n  - cupy=12.3.0*\n  - faiss-gpu=1.7.4*\n  - pytorch::pytorch==2.1.1[build=*cuda11.8*]\n  - tensorflow==2.13.1[build=cuda118*]\n  - tensorflow-base==2.13.1[build=cuda118*]\n  - xformers::xformers==0.0.23[build=*cu11.8*]\n\nCurrent channels:\n\n  - https://conda.anaconda.org/conda-forge\n  - file:///tmp/tmpwvhp9onc\n  - https://conda.anaconda.org/nvidia\n  - https://conda.anaconda.org/pytorch\n  - https://conda.anaconda.org/xformers\n  - https://repo.anaconda.com/pkgs/free\n\nTo search for alternate channels that may provide the conda package you're\nlooking for, navigate to\n\n    https://anaconda.org\n\nand use the search bar at the top of the page.\n",
  "exception_name": "PackagesNotFoundError",
  "exception_type": "<class 'conda.exceptions.PackagesNotFoundError'>",
  "message": "The following packages are not available from current channels:\n\n  - nvidia::cuda-cudart==11.8\n  - cudatoolkit=11.8*\n  - cupy=12.3.0*\n  - faiss-gpu=1.7.4*\n  - pytorch::pytorch==2.1.1[build=*cuda11.8*]\n  - tensorflow==2.13.1[build=cuda118*]\n  - tensorflow-base==2.13.1[build=cuda118*]\n  - xformers::xformers==0.0.23[build=*cu11.8*]\n\nCurrent channels:\n\n  - https://conda.anaconda.org/conda-forge\n  - file:///tmp/tmpwvhp9onc\n  - https://conda.anaconda.org/nvidia\n  - https://conda.anaconda.org/pytorch\n  - https://conda.anaconda.org/xformers\n  - https://repo.anaconda.com/pkgs/free\n\nTo search for alternate channels that may provide the conda package you're\nlooking for, navigate to\n\n    https://anaconda.org\n\nand use the search bar at the top of the page.\n",
  "packages": [
    "nvidia::cuda-cudart==11.8",
    "cudatoolkit=11.8*",
    "cupy=12.3.0*",
    "faiss-gpu=1.7.4*",
    "pytorch::pytorch==2.1.1[build=*cuda11.8*]",
    "tensorflow==2.13.1[build=cuda118*]",
    "tensorflow-base==2.13.1[build=cuda118*]",
    "xformers::xformers==0.0.23[build=*cu11.8*]"
  ],
  "packages_formatted": "  - nvidia::cuda-cudart==11.8\n  - cudatoolkit=11.8*\n  - cupy=12.3.0*\n  - faiss-gpu=1.7.4*\n  - pytorch::pytorch==2.1.1[build=*cuda11.8*]\n  - tensorflow==2.13.1[build=cuda118*]\n  - tensorflow-base==2.13.1[build=cuda118*]\n  - xformers::xformers==0.0.23[build=*cu11.8*]"
}


Traceback (most recent call last):
  File "/usr/local/bin/conda-lock", line 8, in <module>
    sys.exit(main())
  File "/usr/lib/python3/dist-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_lock.py", line 1398, in lock
    lock_func(
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_lock.py", line 1106, in run_lock
    make_lock_files(
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_lock.py", line 393, in make_lock_files
    fresh_lock_content = create_lockfile_from_spec(
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_lock.py", line 834, in create_lockfile_from_spec
    deps = _solve_for_arch(
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_lock.py", line 748, in _solve_for_arch
    conda_deps = solve_conda(
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_solver.py", line 157, in solve_conda
    dry_run_install = solve_specs_for_arch(
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/conda_solver.py", line 369, in solve_specs_for_arch
    proc.check_returncode()
  File "/usr/local/lib/python3.9/dist-packages/conda_lock/_vendor/poetry/utils/_compat.py", line 168, in check_returncode
    raise CalledProcessError(
conda_lock._vendor.poetry.utils._compat.CalledProcessError: Command '['/home/pescu/miniconda3/bin/conda', 'create', '--prefix', '/tmp/tmp77tgypin/prefix', '--dry-run', '--json', '--override-channels', '--channel', 'conda-forge', '--channel', 'file:///tmp/tmpwvhp9onc', 'accelerate 0.25.0.*', 'aiofiles 23.2.1.*', 'aiohttp 3.8.5.*', 'cudatoolkit 11.8.*', 'cupy 12.3.0.*', 'dill 0.3.7.*', 'diskcache 5.6.3.*', 'einops 0.7.0.*', 'faiss-gpu 1.7.4.*', 'gensim 4.3.2.*', 'jsonpickle 3.0.2.*', 'keras 2.13.1.*', 'langchain 0.1.4.*', 'langchain-community 0.0.16.*', 'lightgbm 4.1.0.*', 'loguru 0.7.*', 'more-itertools 10.2.0.*', 'numba 0.58.*', 'numpy 1.25.2.*', 'nvidia::cuda-cudart=11.8', 'pandas 2.1.3.*', 'peft 0.7.1.*', 'pip *', 'polars 0.19.0.*', 'pytest 7.4.*', 'python 3.10.13.*', 'pytorch-lightning 2.1.1.*', 'pytorch::pytorch==2.1.1[build=*cuda11.8*]', 'ray-default 2.8.1.*', 'scikit-learn 1.3.2.*', 'scipy 1.9.3.*', 'sentencepiece 0.1.99.*', 'sentence-transformers 2.2.2.*', 'spacy 3.7.2.*', 'statsmodels 0.14.0.*', 'tensorflow-base 2.13.1 cuda118*', 'tensorflow 2.13.1 cuda118*', 'tqdm 4.66.*', 'transformers 4.37.1.*', 'xarray 2023.11.0.*', 'xformers::xformers==0.0.23[build=*cu11.8*]']' returned non-zero exit status 1.

As you suggested, we tried to find a suitable version on the conda channels, but unfortunately the only build that uses cuda118 is incompatible with the pinned versions of the other packages.

Have you tried unpinning the exact version pins on any conflicting packages? This is acceptable (and sometimes necessary) when adding new packages, provided the required version changes to the other packages are not significant.
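For example, a loosened spec in environment-gpu.yml might look like the fragment below. This is only an illustrative sketch: the package names and version bounds are examples, not the repo's actual pins, and you should only relax the pins that the solver reports as conflicting.

```yaml
# Illustrative fragment of environment-gpu.yml (versions are examples only).
dependencies:
  - langchain              # new package, left unpinned so the solver can pick a version
  - transformers>=4.37     # loosened from an exact pin (e.g. 4.37.1.*) to resolve a conflict
  - pytorch::pytorch=2.1.1=*cuda11.8*   # GPU-critical pins should stay exact
```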

Do you have any suggestions on how to set up conda-lock correctly?

You should use `make update-lockfiles` or follow the conventions established there exactly. In your one-off call to `conda-lock`, you fail to specify that you only want to build for `linux-64`, and you'll notice that the solve fails when it tries to build for `osx-64` (it makes sense that many of these channels are not available for Mac architectures). I would recommend using the `make` command to build the lockfile, or copy-pasting the exact command used in the Makefile.
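For reference, a one-off invocation restricted to `linux-64` could look roughly like the sketch below. The flags are standard conda-lock options, but the file names and the authoritative command are whatever the runtime repo's Makefile uses, so copy from there rather than from this example.

```shell
# Rebuild the GPU lockfile for linux-64 only (sketch; file names assumed
# to match the runtime repo's conventions).
conda-lock \
    --mamba \
    -p linux-64 \
    -f environment-gpu.yml \
    --lockfile conda-lock-gpu.yml
```

Without `-p linux-64`, conda-lock falls back to its default platform list, which includes `osx-64`, and the solve fails there for the CUDA-pinned packages.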