| Name: | python3-ramalama |
|---|---|
| Version: | 0.6.1 |
| Release: | 1.el10_0 |
| Architecture: | noarch |
| Group: | Unspecified |
| Size: | 398015 |
| License: | MIT |
| RPM: | python3-ramalama-0.6.1-1.el10_0.noarch.rpm |
| Source RPM: | python-ramalama-0.6.1-1.el10_0.src.rpm |
| Build Date: | Thu Feb 27 2025 |
| Build Host: | build-ol10-x86_64.oracle.com |
| Vendor: | Oracle America |
| URL: | https://github.com/containers/ramalama |
| Summary: | RamaLama is a command line tool for working with AI LLM models |
| Description: | RamaLama is a command line tool for working with AI LLM models. On first run, RamaLama inspects your system for GPU support, falling back to CPU support if no GPUs are present. It then uses a container engine such as Podman to pull an OCI image containing all of the software necessary to run an AI model for your system's setup, eliminating the need for the user to configure the system for AI themselves. After this initialization, RamaLama runs AI models within a container based on that OCI image. |
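A minimal session illustrating the workflow described above might look like the following. This is a sketch: the model name is illustrative, the available subcommands and registries depend on the installed version, and `ramalama --help` is the authoritative reference.

```shell
# Install the package on Oracle Linux / EL10 (assumes the providing repo is enabled).
sudo dnf install -y python3-ramalama

# On first use, RamaLama detects GPU (or falls back to CPU) support and pulls a
# matching OCI image via a container engine such as Podman; the model itself then
# runs inside a container based on that image.
ramalama pull tinyllama    # fetch a model (model name is illustrative)
ramalama run tinyllama     # chat with the model interactively
ramalama list              # show locally available models
```

Because the container image bundles the runtime stack, no host-side AI configuration is required beyond having a container engine available.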
- Update to 0.6.1 upstream release
- Update to 0.6.0 upstream release
- Update to 0.5.2 release
- Update to ramalama-0.5.0
- Fix spec file to get CentOS Stream 10 building
- Fix needed to match upstream
- Do manual addition of PR #4 items
- Fix changes to spec file description and other items found in review
- Incorporate upstream PR
- Switch to go-md2man instead of full name