SillyTavern Weekly: DeepSeek V4, Kimi K2.6, GLM-5.1 — 4/27/26

SillyTavern weekly news: Seven days of morning briefings compiled so you don’t have to scroll Reddit at 4am. You’re welcome.

The week of April 20–27, 2026 was a big one if you live and breathe model releases and community drama. DeepSeek V4 dropped with a flashier little brother. Z.AI kept everyone in a permanent state of confusion about whether their legacy coding plans were about to be gutted. Kimi K2.6 launched and divided the room like every Kimi release does. And underneath all the noise, the SillyTavern GitHub ship stayed steady with critical bug fixes and a shiny new model on the approved list.

Let’s get into it.

DeepSeek V4 Landed!

The biggest model news of the week was absolutely DeepSeek dropping V4 Flash and V4 Pro on their official platform. April 24, out of nowhere, and the community immediately went feral.

V4 Pro came out at a *75% discount*, which sent half of RP Twitter (okay, Reddit) into a buying frenzy. DeepSeek V4 Flash and Pro both hit the API menus, and people started posting guides within hours.

Speaking of which: someone posted a [DeepSeek V4 RP Guide] covering how to switch between character immersion mode and pure analysis thinking mode. That’s the kind of guide that usually takes the community a week to figure out collectively, so whoever wrote it early deserves a coffee or eight.

By the end of the week, DeepSeek was asking the English-speaking RP community directly for feedback on how they handle roleplay scenarios. A Reddit thread titled “DeepSeek asking for feedback on RP from the English speaking community” popped up on April 25, and it was active all the way through Sunday. Interesting signal that the team is actually paying attention to how these models get used in the wild.

Not all sunshine and discounts, though. Some users reported that **DeepSeek V4 Pro was randomly injecting numbers into outputs**. That bug thread was live by April 26 and didn’t look resolved as this digest went out. Mileage may vary, as always.

One thread that got way more engagement than it probably deserved: “DeepSeek said gooners on top.” I am not going to explain this. You had to be there.

Z.AI Drama

Remember last week’s Z.AI / GLM saga? It didn’t cool off. If anything, it got more tangled.

A thread titled “Yet another Zai/GLM ban topic” showed up April 21, which tells you everything you need to know about how resolved that situation is. Meanwhile, someone spotted that **GLM’s coding plan docs now officially list SillyTavern as an authorized use**. That’s a legitimate win for the project getting recognized upstream. GLM-5.1 was added to SillyTavern’s models list during this week (pull #5361), and the official nod from the model provider side matters.

On the flip side, **ZAI legacy coding plan subscribers got confirmation their plans are moving to new lower limits** once their current subscription runs out. An April 22 thread confirmed this, and it sent the usual ripple of panic through the subreddits. Legacy plan holders are essentially on borrowed time at the old rates.

Is GLM censored? Someone asked it directly on April 23. The thread didn’t give a clean yes or no, because the answer is “it depends on the endpoint, the setting, and possibly the moon phase.”

If you want the full GLM-4-7 preset rundown and the lore on Stab’s EDH configs, I wrote about it here. GLM-5.1 is newer, but the preset patterns carry over.

Kimi K2.6

April 20 was Kimi K2.6 release day, and the takes came in two waves.

First wave (April 20-21): “Kimi 2.6 isn’t really worth it.”

Second wave (April 22): “Kimi K2.6 is the best LLM for slowburn.”

This tracks. Kimi’s strengths have always been longer context, sustained coherence over extended sessions, and the ability to hold a plot thread without dropping it for three messages. That’s slowburn territory. The early takes were probably from people running it in quick-chat benchmark mode.

If you’re running long-form RP sessions, K2.6 is worth a look. If you’re doing quick back-and-forth, you probably won’t notice the upgrade from K2.5.

The LLM Arena RP Benchmark 2

Someone in the community dropped **LLM Arena – RP Benchmark 2!** on April 25. This is the community’s own framework for evaluating how different models perform in actual roleplay scenarios, separate from standard benchmarks that don’t measure storytelling quality for beans.

These community benchmark threads are always worth a skim even if you don’t participate. The results threads usually surface quirks about models that official benchmarks miss, like “this model refuses to write action beats longer than two sentences” or “that one goes purple prose at the drop of a hat.”

SillyTavern 1.17.0: Still Rolling Out

The 1.17.0 release from March 28 is still the most recent update. In case you missed it:

– **Async file deletion bugs fixed** in the assets endpoint (Tony Gies, pull #5363)
– **Secrets conditionally included in user data backups** (pull #5364) — this one matters if you care about backup security
– **GLM-5.1 added** to the official models list

The CI pipeline also got upgraded to **Node 24**. No drama there, just infrastructure tidying that keeps the project from falling behind on runtime versions.
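For the curious, a runtime bump like that is usually a one-line change in the workflow file. Something like this generic GitHub Actions stanza using `actions/setup-node` (illustrative only — SillyTavern’s actual workflows may look different):

```yaml
# Generic CI job pinned to Node 24 (illustrative, not SillyTavern's actual config)
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 24
      - run: npm ci
      - run: npm test
```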

Community Goodies: Lorebooks, Character Cards, and Game Makers

Worth highlighting a few community contributions that got buried under the other SillyTavern weekly news:

**ZZZ Lorebook** (April 27): Someone dropped a “pretty big, general ZZZ lorebook for anybody interested.” If you’re running ZZZ characters or a ZZZ setting in SillyTavern, this is the kind of thing worth bookmarking.

**MVU Game Maker v0.95** (April 25): Slice of life and dating sim creation tool with persistent multi-character stat tracking. Not strictly SillyTavern, but adjacent enough that if you’re into structured narrative games, it’s worth a look.

**Freaky Frankenstein BOLT Preview** (April 27): A character card preview that showed up the morning this digest was being compiled. Looked wild. Looked exactly as advertised. Check the Reddit thread if that’s your genre.

The Shit That Didn’t Fit Anywhere Else

A few threads that deserve acknowledgment:

– **Nvidia free API bans**: Someone asked if they’d been banned from the free Nvidia API. It happened to other people too. If you’re using the free tier, watch your request counts.
– **DeepSeek V4 rate limits**: “Rate limit exceeded” posts started cropping up April 26, probably correlated with the 75% discount driving a rush of new users.
– **DeepSeek official platform API V4 questions**: Users on the official platform were asking whether they were already on V4. Confusing, because the rollout was staggered.
– **Marinara Engine**: Marinara is making big moves as always. I haven’t had a chance to dig into this one yet, so no big write-up until I’ve tested it personally, but definitely go check it out!
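If those “rate limit exceeded” errors are hitting you, a plain retry-with-exponential-backoff wrapper around your API calls usually smooths things over. A minimal sketch — the `RuntimeError("rate limit ...")` here is a stand-in for whatever your client library actually raises on an HTTP 429, so map it to your own setup:

```python
import time

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff when it signals a rate limit.

    `call` is any zero-argument function; here it raises RuntimeError with
    "rate limit" in the message as a stand-in for a real 429 error.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError as err:
            # Re-raise anything that isn't a rate limit, or the final failure.
            if "rate limit" not in str(err).lower() or attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # waits 1s, 2s, 4s, ...

# Demo with a fake "API" that rejects the first two calls:
attempts = {"n": 0}

def fake_api():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "ok"

print(with_backoff(fake_api, sleep=lambda s: None))  # → ok
```

It won’t get you past a hard account-level ban, but it turns a burst of 429s into a minor delay instead of a broken session.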

That Wraps the Week!

Seven days of SillyTavern community output condensed into one place. DeepSeek V4 is the headline. Z.AI drama is the background radiation. Kimi K2.6 is either great or mid depending on your use case. GitHub was quiet but the commits that landed were the right ones: security fixes, model additions, and a runtime upgrade.

The community is healthy. Discourse is active. Someone always has a lorebook and someone always has a complaint about rate limits. Check back for more SillyTavern weekly news every week, right here.

Consume. Create. Obsess.

More tools, guides, and rabbit holes at rpfiend.com.
