A collection of bookmarks filtered by the tag "Expert Parallelism".
Related Bookmarks
dev.synthetic.new
Synthetic LLM Hosted Models
Chat with open-source models privately
z.ai
GLM-5: From Vibe Coding to Agentic Engineering
GLM-5 is a 744B-parameter MoE model (40B active) from Zhipu AI, scaled up from GLM-4.5's 355B with 28.5T pre-training tokens and DeepSeek Sparse Atten...
openrouter.ai
Trinity Mini (free) - API, Providers, Stats
Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. Engineered for eff...
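The sparse mixture-of-experts setup described above (128 experts, 8 active per token) can be sketched with a toy top-k routing layer. This is a minimal illustrative sketch, not Trinity Mini's actual implementation; the hidden size, gating network, and per-expert FFNs are all assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 128   # total experts in the MoE layer
TOP_K = 8           # experts activated per token
D_MODEL = 64        # toy hidden size (illustrative, not the model's)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy parameters: a gating matrix and one tiny linear "expert" each.
W_gate = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
           for _ in range(NUM_EXPERTS)]

def moe_layer(tokens):
    """Route each token to its top-k experts and mix their outputs."""
    logits = tokens @ W_gate                        # (n_tokens, NUM_EXPERTS)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]  # chosen expert indices
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        chosen = topk[i]
        weights = softmax(logits[i, chosen])        # renormalize over chosen
        for w, e in zip(weights, chosen):
            out[i] += w * (tok @ experts[e])        # weighted expert outputs
    return out, topk

tokens = rng.standard_normal((4, D_MODEL))
out, routed = moe_layer(tokens)
print(out.shape)     # (4, 64): same shape as the input tokens
print(routed.shape)  # (4, 8): each token activated TOP_K experts
```

Only 8 of the 128 expert matrices touch any given token, which is what keeps the active parameter count (3B of 26B here) far below the total; expert parallelism then shards those experts across devices and dispatches tokens to wherever their chosen experts live.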
Java SDK for Apple Maps Server API — geocoding, search, directions
Book Finder (findmybook.net)
Book search and recommendation engine with OpenAI integration
Related Books
The RLHF Book
Nathan Lambert
This is a guide to reinforcement learning from human feedback (RLHF), alignment, and post-training for Large Language Models (LLMs). Author Nathan Lam...
Related Investments
WeLoveNoCode
Platform connecting businesses with no-code developers and tools.
Toucan
Toucan was a language learning Chrome extension for in-browser language learning.
AngelList
Platform connecting startups with investors, talent, and resources for fundraising and growth.
Build a Reasoning Model (From Scratch)
Sebastian Raschka
A deep dive into the architecture and implementation of AI models capable of logical deduction and multi-step reasoning. It explains how t...
Secrets of the JavaScript Ninja, 2nd Edition
John Resig, Bear Bibeault, Josip Maras
More than ever, the web is a universal platform for all types of applications, and JavaScript is the language of the web. For anyone serious about web...