Honorable mention
An RNN-meets-transformer language-model architecture that uses linear attention to run in O(n) memory, a unique open line of work for long-context and embedded inference.
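The O(n) claim above comes from replacing softmax attention with a recurrence over the sequence: a fixed-size state is updated once per token, so memory does not grow with context length squared. A minimal sketch of generic linear attention in its recurrent form (this is an illustrative formulation with a ReLU feature map, not the listed project's exact architecture):

```python
import numpy as np

def linear_attention(q, k, v):
    """Recurrent linear attention: O(n) time, constant per-step state.

    The state S accumulates outer products k_t v_t^T, and z accumulates
    the keys for normalization, so no n x n attention matrix is ever built.
    A generic sketch, not any specific model's formulation.
    """
    d, dv = q.shape[1], v.shape[1]
    S = np.zeros((d, dv))   # running sum of outer(phi(k_t), v_t)
    z = np.zeros(d)         # running sum of phi(k_t) for normalization
    out = []
    eps = 1e-6
    for qt, kt, vt in zip(q, k, v):
        # Positive feature map keeps the normalizer well-defined.
        phi_q = np.maximum(qt, 0) + eps
        phi_k = np.maximum(kt, 0) + eps
        S += np.outer(phi_k, vt)
        z += phi_k
        out.append(phi_q @ S / (phi_q @ z + eps))
    return np.array(out)
```

Because the loop carries only `S` and `z`, the same update can be run token by token at inference time, which is what makes this family of models attractive for embedded, long-context use.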
llm, architecture, linear-attention, open-weights