LLM Inference Bookmarks
A collection of bookmarks filtered by the tag "LLM Inference".
Filter by: AMD Strix Halo, C/C++ Implementation, GGUF Models, Hardware Optimization, Llama.cpp, LLM Inference, Local LLMs, Quantization Techniques, Ryzen AI (+3 more)
GitHub - ubergarm/llama.cpp at ug/port-sweep-bench
github.com | Mar 31, 2026
Tags: LLM Inference, GGUF Models, Llama.cpp
Running 122B-Parameter LLMs Locally on AMD Strix Halo for OpenClaw:...
linkedin.com | Mar 21, 2026
Tags: Local LLMs, AMD Strix Halo, LLM Inference