I'm tired of LLM bullshitting. So I fixed it. (codeberg.org)
from SuspciousCarrot78@lemmy.world to privacy@lemmy.ml on 22 Jan 13:41
https://lemmy.world/post/41992574

Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits” - link.springer.com/article/…/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
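
If you’ve never wired one of these up before, the client side really is just “point your OpenAI-compatible thing at the router”. Rough sketch below - the port, path and model name are placeholders I made up, not gospel; check the README for the real values.

from openai import OpenAI

# llama-conductor speaks OpenAI-compatible HTTP, so any standard client works.
# Port / model name here are invented for illustration - use whatever your setup exposes.
client = OpenAI(
    base_url="http://localhost:8000/v1",   # wherever the router is listening (assumption)
    api_key="not-needed-locally",          # local-first: no real key required
)

resp = client.chat.completions.create(
    model="qwen3-4b-instruct",             # whatever llama-swap serves up
    messages=[{"role": "user", "content": "yo, what did the Commodore C64 retail for in 1982?"}],
)
print(resp.choices[0].message.content)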

I tried to make a glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them, then attach the folder as a KB.

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. Eg:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
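
For the curious, the whole “fast recall” trick is about as 1990s as the KB mechanics. A toy sketch of the idea (not the actual Vodka code - file name, fields and TTL are made up):

import json, time
from pathlib import Path

STORE = Path("vodka_facts.json")   # hypothetical file name

def _load():
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def remember(key, value, ttl_days=30):          # the "!!" path
    facts = _load()
    facts[key] = {"value": value, "stored": time.time(), "ttl_days": ttl_days}
    STORE.write_text(json.dumps(facts, indent=2))

def recall(key):                                # the "??" path
    facts = _load()
    hit = facts.get(key)
    if not hit:
        return "NOT STORED"                     # no guessing, no vibes
    hit["touched"] = time.time()                # touch metadata on read
    STORE.write_text(json.dumps(facts, indent=2))
    return hit["value"]                         # verbatim, straight off disk

remember("server ip", "203.0.113.42")
print(recall("server ip"))                      # -> 203.0.113.42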


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

#privacy


itkovian@lemmy.world on 22 Jan 13:45 next collapse

Based AF. Can anyone more knowledgeable explain how it works? I am not able to understand.

SuspciousCarrot78@lemmy.world on 22 Jan 13:48 collapse

Hell yes I can explain. What would you like to know.

itkovian@lemmy.world on 22 Jan 13:52 collapse

As I understand it, it corrects the output of LLMs. If so, how does it actually work?

SuspciousCarrot78@lemmy.world on 22 Jan 14:42 collapse

Good question.

It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

There are basically three modes, each stricter than the last. The default is “serious mode” (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

Additionally, Vodka (made up of two sub-modules - “cut the crap” and “fast recall”) operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what’s been said. That summary isn’t LLM-generated either - it’s concatenation (dumb text matching), so no made-up vibes.

Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

It writes what you tell it to a text file and then, when you ask about it, spits it back out verbatim (!! / ??).

And that’s the baseline.
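
If it helps, the “cut the crap” half is conceptually this dumb (sketch only - the real module is fancier, and the numbers here are invented):

MAX_CHARS = 8000   # whatever keeps your KV cache happy on your potato

def bounded_context(turns, max_chars=MAX_CHARS):
    # Walk backwards from the newest turn, stop when the budget is spent.
    window, used = [], 0
    for turn in reversed(turns):
        if used + len(turn) > max_chars:
            break
        window.append(turn)
        used += len(turn)
    return "\n".join(reversed(window))   # oldest-to-newest, hard-capped, no LLM involved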

In KB mode, you make the LLM answer under the above settings, with reference to your docs ONLY (in the first instance).

When you >>attach <kb>, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).

TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.
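
Mechanically, the provenance part is nothing exotic - something in the spirit of this (my sketch, not the router’s actual schema or file layout):

import hashlib
from pathlib import Path

def write_summ(source: Path, summary_text: str, out_dir: Path) -> str:
    # Hash the exact bytes the summary was built from.
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    header = f"<!-- source: {source.name} -->\n<!-- sha256: {digest} -->\n\n"
    (out_dir / f"SUMM_{source.stem}.md").write_text(header + summary_text)
    return digest   # every claim in the SUMM traces back to these exact bytes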

Finally, Mentats mode (Vault / Qdrant). This is the “I am done with your shit” path.

It’s all three of the above PLUS a counter-factual sweep.

It runs ONLY on stuff you’ve promoted into the vault.

What it does is take your question and reframe it so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

In step 1, it runs that past the thinker model. The answer is then passed on to a “critic” model (a different LLM). That model’s job is to look at the thinker’s output and say “bullshit - what about xyz?”.

It sends that back to the thinker…who then answers and provides the final output. But if it CANNOT answer the critic’s questions (based on the stored info), it will tell you. No soup for you, again!
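
In code shape, that loop is roughly this (hand-wavy sketch; ask() and the prompts are mine, not what the repo actually sends):

def mentats(question, facts, ask):
    if not facts:   # nothing relevant in the Vault -> forced refusal, model not consulted
        return "FINAL_ANSWER:\nThe provided facts do not contain relevant information.\nFACTS_USED: NONE"

    draft = ask("thinker", f"Answer ONLY from these facts:\n{facts}\n\nQ: {question}")
    objection = ask("critic", f"Find gaps or unsupported claims in this answer:\n{draft}")
    final = ask("thinker",
                f"Facts:\n{facts}\n\nDraft:\n{draft}\n\nCritic says:\n{objection}\n"
                "Revise. If the critic's gaps cannot be answered from the facts, refuse.")
    return final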

TL;DR:

The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I’ve given you all the tools I could think of to do that).

Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

itkovian@lemmy.world on 22 Jan 15:16 collapse

That is much clearer. Thank you for making this. It actually makes LLMs useful with far fewer downsides.

SuspciousCarrot78@lemmy.world on 22 Jan 15:22 collapse

God, I hope so. Else I just pissed 4 months up the wall and shouted a lot of swears at my monitor for nada :)

Let me know if it works for you

itkovian@lemmy.world on 22 Jan 15:23 collapse

Will do.

FrankLaskey@lemmy.ml on 22 Jan 13:53 next collapse

This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

[deleted] on 22 Jan 14:13 next collapse

.

SuspciousCarrot78@lemmy.world on 22 Jan 17:19 collapse

Comment removed by (auto-mod?) cause I said sexy bot. Weird.

Restating again: On the stuff you use the pipeline/s on? About 85-90% in my tests. Just don’t GIGO (Garbage In, Garbage Out) your source docs…and don’t use a dumb LLM. That’s why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).

7toed@midwest.social on 23 Jan 06:56 next collapse

abliterated one

Please elaborate, that alone piqued my curiosity. Pardon me if I could’ve searched

SuspciousCarrot78@lemmy.world on 23 Jan 07:34 collapse

Yes of course.

Abliterated is a technical LLM term meaning “safety refusals removed”.

Basically, abliteration removes the security theatre that gets baked into LLMs like ChatGPT.

I don’t like my tools deciding for me what I can and cannot do with them.

I decide.

Anyway, the model I use has been modified with a newer, less lobotomy-inducing version of abliteration (the lobotomy previously being a real risk).

huggingface.co/DavidAU/…/main

According to validation I’ve seen online (and of course, I tested it myself), it’s lost next to zero “IQ” and dropped refusals by about…90%.

BEFORE: Initial refusals: 99/100

AFTER: Refusals: 8/100 [lower is better], KL divergence: 0.02 (less than 1 is great, “0” is perfect.)
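
For anyone who wants the KL number to mean something: it’s just the standard divergence between the original model’s next-token distribution and the abliterated one’s. Toy sketch with made-up numbers, not the actual eval:

import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i P(i) * log(P(i) / Q(i))
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.70, 0.20, 0.10]   # original model's next-token probabilities (toy)
q = [0.68, 0.22, 0.10]   # abliterated model's probabilities (toy)
print(round(kl_divergence(p, q), 4))   # -> 0.0012: barely any drift, "IQ" basically intact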

In fact, in some domains it’s actually a touch smarter, because it doesn’t try to give you “perfect” model answers. Take maths reasoning, for example: where the answer is basically impossible, it will say “the answer is impossible. Here’s the nearest workable solution based on context” instead of getting stuck in a self-reinforcing loop, trying to please you, and then crashing.

In theory, that means you could ask it for directions on how to cook Meth and it would tell you.

I’m fairly certain the devs didn’t add the instructions for that in there, but if they did, the LLM won’t go “sorry, I can’t tell you, Dave”.

Bonus: with my harness over the top, you’d have an even better idea if it was full of shit (it probably would be, because, again, I’m pretty sure they don’t train LLMs on Breaking Bad).

Extra double bonus: If you fed it exact instructions for cooking meth, using the methods I outlined? It will tell you exactly how to cook Meth, 100% of the time.

Say…you…uh…wanna cook some meth? :P

PS: if you’re more of a visual learner, this might be a better explanation

www.youtube.com/watch?v=gr5nl3P4nyM

7toed@midwest.social on 23 Jan 16:00 collapse

Thank you again for your explanations. After being washed up with everything AI, I’m genuinely excited to set this up. I know what I’m doing today! I will surely be back

SuspciousCarrot78@lemmy.world on 23 Jan 16:03 collapse

Please enjoy. Make sure you use >>FR mode at least once. You probably won’t like the seed quotes but maybe just maybe you might and I’ll be able to hear the “ha” from here.

DoctimusLime@lemmygrad.ml on 24 Jan 13:01 collapse

This is so cool to read about, thx for doing what you do and pls keep doing it! We need high quality and trustworthy information now more than ever I think. Damn nzs spewing their propaganda everywhere and radicalising the vulnerable. Thanks!

BaroqueInMind@piefed.social on 22 Jan 13:54 next collapse

I have no remarks, just really amused with your writing in your repo.

Going to build a Docker image and self-host this shit you made and enjoy your hard work.

Thank you for this!

SuspciousCarrot78@lemmy.world on 22 Jan 14:06 next collapse

Thank you <3

Please let me know how it works…and enjoy the >>FR settings. If you’ve ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you’ll love it.

Diurnambule@jlai.lu on 23 Jan 07:09 collapse

Same sentiment. Tonight it runs on my systems XD.

SuspciousCarrot78@lemmy.world on 23 Jan 08:02 collapse

There are literally dozens of us. DOZENS!

I’m on a potato, so I can’t attach it to something super sexy, like a 405B or a MoE.

If you do, please report back.

PS: You may see (in the docs) occasional references to MoA that slipped past me. That doesn’t stand for Mixture of Agents. It stood for “Mixture of Assholes”. That’s always been my mental model for this.

Or, in the language of my people, this was my basic design philosophy:

YOU (question) -> ROUTER+DOCS (Ah shit, here we go again. I hate my life)

ROUTER+DOCS -> Asshole 1: Qwen (“I’m right”)

ROUTER+DOCS -> Asshole 2: Phi (“No, I’m right”)

ROUTER+DOCS -> Asshole 3: Nanbeige (“Idiots, I’m right!”)

(all assholes) -> ROUTER+DOCS (Jesus, WTF. I need booze now)

ROUTER+DOCS -> YOU (answer)

(this could have been funnier if the ASCII actually worked, but man…Lemmy borks that)

EDIT: If you want to be boring about it, it’s more like this

pastebin.com/gNe7bkwa

PS: If you like it, let other people in other places know about it.

Terces@lemmy.world on 22 Jan 14:01 next collapse

Fuck yeah…good job. This is how I would like to see “AI” implemented. Is there some way to attach other data sources? Something like a local hosted wiki?

SuspciousCarrot78@lemmy.world on 22 Jan 14:05 collapse

Hmm. I dunno - never tried. I suppose if the wiki could be imported in a compatible format…it should be able to chew thru it just fine. Wikis are usually just gussied-up text files anyway :) Drop the contents of your wiki in there as .md’s and see what it does

SpaceNoodle@lemmy.world on 22 Jan 14:30 collapse

I wanna just plug Wikipedia into this and see if it turns an LLM into something useful for the general case.

SuspciousCarrot78@lemmy.world on 22 Jan 14:50 collapse

LOL. Don’t do that. Wikipedia is THE noisiest source.

Would you like me to show you HOW and WHY the SUMM pathway works? I built it after I tried a “YOLO wikipedia in that shit - done, bby!”. It…ended poorly

MNByChoice@midwest.social on 22 Jan 14:59 next collapse

Not OP, but random human.

Glad you tried the “YOLO Wikipedia”, and are sharing that fact as it saves the rest of us time. :)

SuspciousCarrot78@lemmy.world on 22 Jan 15:15 collapse

Don’t YOLO wikipedia, folks. It hurts.

SpaceNoodle@lemmy.world on 22 Jan 15:07 collapse

Yes please

SuspciousCarrot78@lemmy.world on 22 Jan 15:32 collapse

Of course. Here is a copy-paste from my now defunct Reddit account. Feel free to follow the pastebin links to see what v1 of SUMM did. What the router uses now is v1.1:

########

My RAG

I’ve recently been playing around with making my SLMs more useful and reliable. I’d like to share some of the things I did, so that perhaps it might help someone else in the same boat.

Initially, I had the (obvious, wrong) idea that “well, shit, I’ll just RAG dump Wikipedia and job done”. I trust it’s obvious why that’s not a great idea (retrieval gets noisy, chunks lack context, model spends more time sifting than answering).

Instead, I thought to myself “why don’t I use the Didactic Method to teach my SLMs what the ground truth is, and then let them argue from there?”. After all, Qwen3-4B is pretty good with its reasoning…it just needs to not start from a position of shit.

The basic workflow -

TLDR

  • Use a strong model to write clean, didactic notes from source docs.
  • Distill + structure those notes with a local 8B model.
  • Load distilled notes into RAG (I love you, Qdrant).
  • Use a 4B model with low temp + strict style as the front‑end brain.
  • Let it consult RAG both for facts and for “who should answer this?” policy.

Details

(1) Create a “model answer” --> this involves creating a summary of source material (like, say, a markdown document explaining launch flags for llama.cpp). You can do this manually or use any capable local model to do it, but for my testing, I fed the source info straight into Gippity 5 with a specific “make me a good summary of this, hoss” prompt

Like so: https://pastebin.com/FaAB2A6f

(2) Save that output as SUMM-llama-flags.md. You can copy-paste it into Notepad++ and do it manually if you need to.

(3) Once the summary has been created, use a local “extractor” and “formatter” model to batch-extract high-yield information (into JSON) and then convert that into a second distillation (markdown). I used Qwen3-8B for this.

Extract prompt https://pastebin.com/nT3cNWW1

Format prompt (run directly on that content after model has finished its extraction) https://pastebin.com/PNLePhW8

(4) Save that as DISTILL-llama-flags.md.

(5) Drop temperature low (0.3) and make Qwen3-4B cut the cutesy imagination shit (top_p = 0.9, top_k = 0), not that it did a lot of that to begin with.

(6) Import DISTILL-llama-flags.md into your RAG solution (god I love markdown).
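
For reference, step (5) as an actual request (I’m assuming a llama.cpp-style server on localhost:8080 that accepts its native sampler fields alongside the OpenAI ones - adjust for your backend):

import requests

payload = {
    "model": "qwen3-4b-instruct",
    "temperature": 0.3,
    "top_p": 0.9,
    "top_k": 0,   # llama.cpp treats 0 as "disabled"
    "messages": [
        {"role": "system", "content": "Answer only from the provided DISTILL notes."},
        {"role": "user", "content": "What does --ctx-size do?"},
    ],
}
r = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
print(r.json()["choices"][0]["message"]["content"])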

Once I had that in place, I also created some “fence around the law” (to quote Judaism) guard-rails and threw them into RAG. This is my question meta, that I can append to the front (or back) of any query. Basically, I can ask the SLM “based on escalation policy and the complexity of what I’m asking you, who should answer this question? You or someone else? Explain why.”

https://pastebin.com/rDj15gkR

(I also created another “how much will this cost me to answer with X on Open Router” calculator, a “this is my rig” ground truth document etc but those are sort of bespoke for my use-case and may not be generalisable. You get the idea though; you can create a bunch of IF-THEN rules).

The TL;DR of all this -

With a GOOD initial summary (and distillation) you can make a VERY capable little brain, that will argue quite well from first principles. Be aware, this can be a lossy pipeline…so make sure you don’t GIGO yourself into stupid. IOW, trust but verify and keep both the source material AND SUMM-file.md until you’re confident with the pipeline. (And of course, re-verify anything critical as needed).

I tested, and retested, and re-retested a lot (literally 28 million tokens on OR to make triple sure), doing a bunch of adversarial Q&A testing, side by side with GPT5, to triple-check that this worked as I hoped it would.

The results basically showed a 9/10 for direct recall of facts, 7-8/10 for “argue based on my knowledge stack” or “extrapolate based on knowledge stack + reference to X website” and about 6/10 on “based on knowledge, give me your best guess about X adjacent topic”. That’s a LOT better than just YOLOing random shit into Qdrant…and orders of magnitude better than relying on pre-trained data.

Additionally, I made this cute little system prompt to give me some fake confidence -

Tone: neutral, precise, low-context.

Rules:

  • Answer first. No preamble. ≤3 short paragraphs.
  • Minimal emotion or politeness.

SlimePirate@lemmy.dbzer0.com on 22 Jan 14:08 next collapse

Voodoo is not magic btw, it was sullied by colonists

SuspciousCarrot78@lemmy.world on 22 Jan 15:11 next collapse

Damn Englishmen. With their…ways.

SlimePirate@lemmy.dbzer0.com on 22 Jan 18:29 collapse

I think this one was done by France - not that that’s better though

SuspciousCarrot78@lemmy.world on 22 Jan 19:00 collapse

Ok, but can we just blame the English anyway?

tomenzgg@midwest.social on 23 Jan 06:05 next collapse

As someone of Haitian descent, no; the French get hardly enough blame, as it is.

Pudutr0n@lemmy.world on 23 Jan 10:59 collapse

Always

frightful_hobgoblin@lemmy.ml on 23 Jan 10:39 collapse

wym?

db0@lemmy.dbzer0.com on 22 Jan 14:15 next collapse

Any chance you can also make it compatible with AI Horde?

SuspciousCarrot78@lemmy.world on 22 Jan 15:09 collapse

I’ll look. I have no idea what that is.

db0@lemmy.dbzer0.com on 22 Jan 15:24 collapse

In a nutshell: Local-LLMs, crowdsourced at scale.

FauxLiving@lemmy.world on 22 Jan 20:50 collapse

AI Horde has an OpenAI-compatible REST API (oai.aihorde.net). They say that it doesn’t support the full feature set of their native API, but it will almost assuredly work with this.

OP manually builds the OpenAI-style JSON payload and then uses the Python requests library to handle the request.

The fields they’re using match the documentation on oai.aihorde.net/docs

You would need to add a header with your AI Horde API key. Looks like that would only need to be done in router_fastapi.py - call_model_prompt() (line 269) and call_model_messages() (line 303) - and then everything else is set up according to the documentation.
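
Untested sketch of what that could look like from the client side, via the OpenAI-compatible endpoint (I’m assuming the Horde key goes in as the standard bearer token - confirm against oai.aihorde.net/docs before trusting it):

from openai import OpenAI

horde = OpenAI(
    base_url="https://oai.aihorde.net/v1",
    api_key="YOUR_AI_HORDE_API_KEY",   # anonymous keys work but get lowest priority
)
resp = horde.chat.completions.create(
    model="whatever-the-workers-offer",   # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)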

SuspciousCarrot78@lemmy.world on 23 Jan 09:40 collapse

Fuck, I’m an all seeing, prescient wizard! Neat!

Uh, I mean… “yes of course, I intended to have AI Horde piping baked in from the get go”.

Do let me know if my shit works with their shit to make…not shit.

#####################

EDIT: For fun, I asked my router to ask Qwen3-4B 2507 WTF AI Horde was. I’m showing this for interest’s sake, because 1) it shows you how the thing works IRL and 2) it shows you how and when it’s WRONG in a LOUD way, if you know how to read it.

######################

This is in default mode, no KBs attached. Just straight YOLOing the question at it, cold, and asking it to argue from its own internal priors.

AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.

Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.

Confidence: Medium | Source: Model

#####################################

That’s the router’s way of saying “hey, best guess. Sorry”. And from what I can see, it’s sort of right, but the specifics are wrong. So, OK for a cheap YOLO, not for “I bet my money on this”. Let’s call this a 4/10.

####################################

Now, I gave it web access privileges. It did this. Let’s call this 7/10

An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].

####################################

Finally, here’s what it does in >>KB mode. I scraped the FAQ and then did >>summ new. Eyeballing it against the FAQ, I’d call this 8.5/10. YMMV

AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.

Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.

Confidence: high | Source: Docs

################################

FINALLY, for “fun”, I >>kb attached all and asked it to reason over the above three responses, in context of what it has in the KB store, and rank them out of 10.

[3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.

[2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.

[1] is partially correct but misleading, scoring 6/10. It correctly describes GPU

Confidence: medium | Source: Docs


TL;DR

  • KB mode: source-based info; when queried against it, adjudication accuracy = v. good
  • Clean, reproducible demo. Try it yourself

db0@lemmy.dbzer0.com on 23 Jan 11:50 collapse

Very impressive. The only mistake on the third one is that the kudos are actually transferable (i.e. “tradable”), but we forbid exchanges for monetary rewards.

Disclaimer: I’m the lead developer for the AI Horde. I also like what you’ve achieved here and it would be interesting to see if we can promote this usage via the AI Horde in some way. If you can think of some integration or collaboration we could do, hit me up!

PS: While the OpenAI API is technically working, we still prefer people to use our own API as it’s much more powerful (allowing people to use multiple models, filter workers, tweak more vars, and so on). If you would support our native API, I’d be happy to add a link to your software on our frontpage in the integrations area for LLMs.

SuspciousCarrot78@lemmy.world on 23 Jan 12:14 collapse

Oh shit! Uh…thank you! Umm. Yes. That was unexpected :)

Re: collab. I’m away for a bit with work, but let me think on it for a bit? There’s got to be a way to make this useful to more peeps.

Believe it or not, I am not a CS guy at ALL (I work in health-care) and I made this for fun, in a cave, with a box of scraps.

I’m not good at CS. I just have a … “very special” brain. As in, I designed this thing from first principles using invariants, which I understand now is not typical CS practice.

db0@lemmy.dbzer0.com on 23 Jan 12:18 collapse

No worries, just wanted to point out we’re always happy to collaborate with other cool FOSS projects.

SuspciousCarrot78@lemmy.world on 23 Jan 12:37 collapse

Thank you :) I’ve been eating a lot of shit on HN (and other places) about this thing. It’s nice not to be called a goon-coder or fantasist, just once.

db0@lemmy.dbzer0.com on 23 Jan 13:05 collapse

WTF is a “goon-coder” lol :D

I haven’t had good experiences with HN myself, even when I was simply trying to post about the AI Horde.

SuspciousCarrot78@lemmy.world on 23 Jan 13:09 collapse

I had to look it up. Apparently, it’s someone who over-optimises the bells and whistles and never ships a finished product.

gooncode.dev

rollin@piefed.social on 22 Jan 14:16 next collapse

At first blush, this looks great to me. Are there limitations with what models it will work with? In particular, can you use this on a lightweight model that will run in 16 GB RAM to prevent it hallucinating? I’ve experimented a little with running Ollama as an NPC AI for Skyrim - I’d love to be able to ask random passers-by if they know where the nearest blacksmith is, for instance. It was just far too unreliable, and worse, it was always confidently unreliable.

This sounds like it could really help these kinds of uses. Sadly I’m away from home for a while so I don’t know when I’ll get a chance to get back on my home rig.

SuspciousCarrot78@lemmy.world on 22 Jan 15:09 collapse

My brother in virtual silicon: I run this shit on a $200 p.o.s with 4 GB of VRAM.

If you can run an LLM at all, this will run. BONUS: because of the way “Vodka” operates, you can run with a smaller context window without eating shit from OOM errors. So…that means… if you could only run a 4B model (because the GGUF itself is 3 GB before the overheads…then you add in the drag from KV cache accumulation)… maybe you can now run the next size up model…or enjoy no-slowdown chats with the model size you have.

rollin@piefed.social on 23 Jan 01:57 collapse

I never knew LLMs could run on such low-spec machines now! That’s amazing. You said elsewhere you’re using Qwen3-4B (abliterated), and I found a page saying that there are Qwen3 models that will run on “virtually any modern PC or Mac; integrated graphics are sufficient. Mobile phones.”

Is there still a big advantage to using Nvidia GPUs? Is your card Nvidia?

My home machine that I’ve installed ollama on (and which I can’t access in the immediate future) has an AMD card, but I’m now toying with putting it on my laptop, which is very midrange and has Intel Arc graphics (which performs a whole lot better than I was expecting in games)

SuspciousCarrot78@lemmy.world on 23 Jan 03:02 collapse

Yep, LLMs can and do run on edge devices (weak hardware).

One of the driving forces for this project was in fact trying to make my $50 Raspberry Pi more capable of running LLMs. It sits powered on all the time, so why not?

No special magic with NVIDIA per se, other than ubiquity.

Yes, my card is NVIDIA, but you don’t need a card to run this.

null@piefed.nullspace.lol on 22 Jan 14:24 next collapse

This is awesome. Definitely gonna dig into this later.

als@lemmy.blahaj.zone on 22 Jan 14:26 next collapse

neat but is this privacy related?

SuspciousCarrot78@lemmy.world on 22 Jan 15:04 next collapse

Yes. Several reasons -

  • Focuses on making LOCAL LLMs more reliable. You can hitch it to OpenRouter or ChatGPT if you want to leak your personal deets everywhere, but that’s not what this is for. I built this to make local, self-hosted stuff BETTER.

  • Entire system operates on curating (and ticketing with provenance trails) local data…so you don’t need to YOLO requests through god knows where to pull information.

  • In theory, you could automate a workflow that does this - poll SearXNG, grab whatever you wanted, make a .md summary, drop it into your KB folder, then tell your LLM “do the thing” (rough sketch of that after this list). Or even use Scrapy if you prefer: github.com/scrapy/scrapy

  • Your memory is stored on disk, at home, in a tamper-proof file that you can inspect. No one else can see it. It doesn’t get leaked by the LLM anywhere. Because until you ask, the LLM literally has no idea what facts you’ve stored. The contents of your KBs, memory stores etc. are CLOSED OFF from the LLM.
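
And since I promised a sketch of that SearXNG bullet: something like this would do it (assumes your SearXNG instance has format=json enabled; paths and URLs are placeholders, not anything the router ships with):

import requests
from pathlib import Path

KB_DIR = Path("kb/web_pulls")   # hypothetical KB folder
KB_DIR.mkdir(parents=True, exist_ok=True)

def pull_to_kb(query: str):
    # Ask SearXNG for JSON results, then dump them as a markdown doc for SUMM to chew on.
    r = requests.get("http://localhost:8888/search",
                     params={"q": query, "format": "json"}, timeout=30)
    results = r.json().get("results", [])[:10]
    lines = [f"# Search notes: {query}", ""]
    for item in results:
        lines.append(f"## {item.get('title', 'untitled')}")
        lines.append(item.get("url", ""))
        lines.append(item.get("content", ""))
        lines.append("")
    (KB_DIR / f"{query.replace(' ', '_')}.md").write_text("\n".join(lines))

pull_to_kb("commodore 64 launch price")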

FrankLaskey@lemmy.ml on 22 Jan 15:06 collapse

Yes, because making locally hosted LLMs actually useful means you don’t need to utilize cloud-based and often proprietary models like ChatGPT or Gemini which hoover up all of your data.

SuspciousCarrot78@lemmy.world on 22 Jan 15:24 collapse

^ exactly that. Thank you!

angelmountain@feddit.nl on 22 Jan 14:28 next collapse

Super interesting build

And if programming doesn’t pan out please start writing for a magazine, love your style (or was this written with your AI?)

SuspciousCarrot78@lemmy.world on 22 Jan 14:51 collapse

Once again: I am a meat popsicle (with ASD), not AI. All errors and foibles are mine :)

Karkitoo@lemmy.ml on 22 Jan 15:42 collapse

meat popsicle

( ͡° ͜ʖ ͡°)

Anyway, the other person is right. Your writing style is great !

I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.

Anyway, version 2: this is a very cool idea! I cannot wait to either:

  • incorporate it into my workflows
  • let it sit in a tab to never be touched ever again
  • theorycraft, do tests and request features so much as to burn out

Last but not least, thank you for not using GitHub as your primary repo

SuspciousCarrot78@lemmy.world on 22 Jan 15:56 collapse

Hmm. One of those things is not like the other, one of those things just isn’t the same…

About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC…

…sorry, sorry…

Anyway, enjoy. Don’t spam my Github inbox plz :)

Karkitoo@lemmy.ml on 22 Jan 16:34 collapse

Don’t spam my Github inbox plz

I can spam your codeberg’s then ? :)

About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC… …sorry, sorry…

Understandable, have a great day.

SuspciousCarrot78@lemmy.world on 22 Jan 16:41 collapse

Don’t spam my Codeberg either.

Just send nudes.

In ASCII format.

By courier pigeon

CIA_chatbot@lemmy.world on 22 Jan 15:00 next collapse

Doing gods work

SuspciousCarrot78@lemmy.world on 22 Jan 15:12 collapse

o7

CIA_chatbot@lemmy.world on 22 Jan 15:35 collapse

Friendship drive activated.

SuspciousCarrot78@lemmy.world on 22 Jan 15:47 collapse

<3

Alvaro@lemmy.blahaj.zone on 22 Jan 16:43 next collapse

I don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

Anarki_@lemmy.blahaj.zone on 22 Jan 16:55 next collapse

Read the post, cuz.

SuspciousCarrot78@lemmy.world on 22 Jan 17:08 collapse

don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

LLMs are inherently unreliable in “free chat” mode. What llama-conductor changes is the failure mode: it only allows the LLM to argue from user curated ground truth and leaves an audit trail.

You don’t have to trust it (black box). You can poke it (glass box). Failure leaves a trail and it can’t just hallucinate a source out of thin air without breaking LOUDLY and OBVIOUSLY.

TL;DR: it won’t piss in your pocket and tell you it’s rain. It may still piss in your pocket (but much less often, because it’s house trained)

bilouba@jlai.lu on 22 Jan 16:48 next collapse

Very impressive! Do you have benchmark to test the reliability? A paper would be awesome to contribute to the science.

SuspciousCarrot78@lemmy.world on 22 Jan 16:58 collapse

Just bush-league ones I did myself, that have no validation or normative values. Not that any of the LLM benchmarks seem to have those either LOL

I’m open to ideas, time willing. Believe it or not, I’m not a code monkey. I do this shit for fun to get away from my real job

bilouba@jlai.lu on 22 Jan 18:57 collapse

I understand, no idea on how to do it. I heard about SWE‑Bench‑Lite that seems to focus on real-world usage. Maybe try to contact “AI Explained” on YT, he’s the best IMO. Your solution might be novel or not, but he might help you figure that out. If it is indeed novel, it might be worth it to share it with the larger community. Of course, I totally get that you might not want to do any of that. Thank you for your work!

wolfrasin@lemmy.today on 22 Jan 17:09 next collapse

Hey Human,

Thank you!

SuspciousCarrot78@lemmy.world on 22 Jan 18:57 collapse

Meep Morp

sp3ctr4l@lemmy.dbzer0.com on 22 Jan 21:09 next collapse

This seems astonishingly more useful than the current paradigm, this is genuinely incredible!

I mean, fellow Autist here, so I guess I am also… biased towards… facts…

But anyway, … I am currently uh, running on Bazzite.

I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.

I am guessing this is not available as a flatpak, lol.

I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!

SuspciousCarrot78@lemmy.world on 23 Jan 02:33 next collapse

Indeed. And have you heard? That makes the normies think we’re clankers (bots). How delightful.

Re: the Linux stuff…please, if someone can do that, please do. I have no idea how to do that. I can figure it out but making it into a “one click install” git command took several years off my life.

Believe it or not, I’m not actually an IT / CS guy. My brain just decided to latch onto this problem one day 6 months ago and do an autism.

I’m 47 and I still haven’t learned how to operate this vehicle…and my steering is getting worse, not better, with age.

sp3ctr4l@lemmy.dbzer0.com on 23 Jan 03:29 collapse

Oh I entirely believe you.

Hell hath no wrath like an annoyed high functioning autist.

I’ve … had my own 6 month black out periods where I came up with something extremely comprehensive and ‘neat’ before.

Seriously, bootstrapping all this is incredibly impressive.

I would… hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.

… but yeah, it is… yet another immensely ironic aspect of being autistic that we’ve been treated and maligned as robots our whole lives, and then when the normies think they’ve actually built the AI from sci fi, no, turns out it’s basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which… appalls the normies when they have to look into a hyperpowered mirror of themselves.

And then, of course, to actually fix this, it’s some random autist no one has ever heard of (apologies if you are famous and I am unaware of this), who is putting in an enormous amount of effort that… most likely, will not be widely recognized.

… fucking normies man.

SuspciousCarrot78@lemmy.world on 23 Jan 04:02 collapse

Not famous, no :)

I hear you, brother. Normally, my hyperfocus is BJJ (I’ve been at that for 25 years; it’s a sickness). I herniated a disc in my low back and lost the ability to exercise for going on 6 months.

BJJ is like catnip for autists. There is an overwhelming population of IT, engineers and ASD coded people in BJJ world.

There’s even a gent we lovingly call Blinky McHeelhook, because well…see for yourself

www.youtube.com/watch?v=9mRsqvRduHY

Noticing the effects of elbow position, creating an entire algorithm, flow chart and epistemology off the fact?

“VERY NORMAL.”

Anyway, when my body said “sit down”, my brain went “ok, watch this”.

I’m sorry. I’m so sorry. No one taught me how to drive this thing :)

PS: I only found out after my eldest was diagnosed. Then my youngest. Then my MIL said “go get tested”. I did.

Result - ASD.

Her response - “We know”.

Great - thanks for telling me. Would have been useful to know, say… 40ish years ago.

Fmstrat@lemmy.world on 25 Jan 03:51 collapse

No promises, but if I end up running this it will be by putting it in a container. If I do, then I’ll put a PR on Codeberg with a Docker Compose file (compatible with Podman on Bazzite).

@SuspciousCarrot78@lemmy.world

SuspciousCarrot78@lemmy.world on 25 Jan 06:34 next collapse

Cheers for that!

sp3ctr4l@lemmy.dbzer0.com on 26 Jan 12:39 collapse

Huzzah!

WolfLink@sh.itjust.works on 22 Jan 21:22 next collapse

I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just that there are LLMs double-checking other LLMs’ work to try to find those issues. There are still no guarantees since it’s still all LLMs.

SuspciousCarrot78@lemmy.world on 23 Jan 06:18 next collapse

Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.

The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don’t feed it shit and it won’t say shit.

In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.

The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.

So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”
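
If it helps, “enforced by the router” means the shape of the code is roughly this (sketch; names are mine, not the repo’s):

REFUSAL = ("FINAL_ANSWER:\nThe provided facts do not contain information "
           "relevant to this question.\nFACTS_USED: NONE")

def mentats_gate(question, retrieve, call_llm):
    chunks = retrieve(question)        # Vault retrieval (Qdrant or whatever)
    if not chunks:
        return REFUSAL                 # the model never even gets called
    return call_llm(question, chunks)  # grounded path only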

skisnow@lemmy.ca on 23 Jan 07:04 collapse

I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.

SuspciousCarrot78@lemmy.world on 23 Jan 07:59 collapse

Yeah, this is different. Try it. It gives you a cryptographic key to the source (which you must provide yourself: please be aware, GIGO).

skisnow@lemmy.ca on 23 Jan 11:03 collapse

How does having a key solve anything? It’s not that the source doesn’t exist, it’s that the source says something different to the LLM’s interpretation of it.

SuspciousCarrot78@lemmy.world on 23 Jan 12:49 collapse

Yeah.

The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.

Having been burnt by LLMs (far too many times), I now start from a position of “fuck you, prove it”.

The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”.

If it does that more than twice, straight in the bin. I have zero chill any more.

Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might…she’s dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.

Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.

So: the hashes don’t fix hallucinations (I don’t even think that’s possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.
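
The drift check itself is about as boring as it sounds - a sketch (file name and recorded hash below are placeholders):

import hashlib
from pathlib import Path

def has_drifted(source: Path, recorded_sha256: str) -> bool:
    current = hashlib.sha256(source.read_bytes()).hexdigest()
    return current != recorded_sha256   # True = the grounding bytes changed under you

recorded = "sha256-from-the-SUMM-header-goes-here"
if has_drifted(Path("kb/c64_prices.md"), recorded):
    print("Source changed since that answer was generated - re-SUMM and re-verify.")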

PS: You’re right that interpretation errors still exist. That’s why Mentats does the triple-pass and why the system clearly flags “missing / unsupported” instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of “trust me, bro.”.

Guess what? I don’t trust you. Prove it or GTFO.

skisnow@lemmy.ca on 23 Jan 13:28 collapse

The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”.

Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources then you haven’t fixed LLM bullshitting

If it does that more than twice, straight in the bin. I have zero chill any more.

That’s… not how any of this works…

Disillusionist@piefed.world on 22 Jan 21:45 next collapse

Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn’t, and actually being serious about addressing its problems and limitations. It’s projects like yours that can demonstrate pathways toward achieving better AI.

UNY0N@lemmy.wtf on 22 Jan 21:51 next collapse

THIS IS AWESOME!!! I’ve been working on using an obsidian vault and a podman ollama container to do something similar, with VSCodium + continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.

I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.

On an unrelated note, you can download wikipedia. Might work well in conjunction with your conductor.

en.wikipedia.org/…/Wikipedia:Database_download

SuspciousCarrot78@lemmy.world on 23 Jan 02:22 collapse

Please enjoy :) Hope it’s of use to you!

EDIT: Please don’t yeet wikipedia into it. It will die. And you will be sad.

UNY0N@lemmy.wtf on 26 Jan 19:32 collapse

Understood.

[deleted] on 22 Jan 21:59 next collapse

.

SuspciousCarrot78@lemmy.world on 23 Jan 02:20 collapse

For the record: none of my posts here are AI-generated. The only model output in this thread is in clearly labeled, cited examples.

I built a tool to make LLMs ground their answers and refuse without sources, not to replace anyone’s voice or thinking.

If it’s useful to you, great. If not, that’s fine too - but let’s keep the discussion about what the system actually does.

Also, being told my writing “sounds like a machine” lands badly, especially as an ND person, so I’d prefer we stick to the technical critique.

[deleted] on 23 Jan 03:08 next collapse

.

btsax@reddthat.com on 23 Jan 03:40 collapse

North Dakotan?

SuspciousCarrot78@lemmy.world on 23 Jan 03:44 collapse

Oh god, I think I liked being called a clanker more :P

(Not North Dakotan. West Australian. Proof: cunt cunt cunty cunt cuntington).

Murdoc@sh.itjust.works on 23 Jan 04:33 next collapse

I wouldn’t know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍

(I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I’m sure that the ADHD would keep me from finishing to go work on something else. 🙃)

SuspciousCarrot78@lemmy.world on 23 Jan 05:34 collapse

Ah - ASD, ADHD and Lemmy. You’re a triple threat, Harry! :)

Glad if it was entertaining, if even a little!

7toed@midwest.social on 23 Jan 07:00 next collapse

I really need this. Each time I try messing with GPT4All’s “reasoning” model, it pisses me off. I’m selective on my inputs, low temperature, local docs, and it’ll tell me things like tension matters for a coil’s magnetic field. Oh, and it spits out what I assume is unformatted LaTeX, so if anyone has an interface/stack recommendation please let me know

SuspciousCarrot78@lemmy.world on 23 Jan 07:02 collapse

I feel your pain. Literally.

I once lost … 24? 26? hrs over a period of days with GPT…each time it confidently asserting “no, for realz, this is the fix”.

This thing I built? Purely spite driven engineering + caffeine + ASD to overcome “Bro, trust me bro”.

I hope it helps.

7toed@midwest.social on 23 Jan 07:05 next collapse

Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn’t cooked up yet… I hope you’re ready, by whatever means you deem fit, for when someone else “invents” this

SuspciousCarrot78@lemmy.world on 23 Jan 07:11 collapse

It’s copyLEFT (AGPL-3.0 license). That means free to share, copy, modify…but you can’t roll a closed-source version of it and sell it for profit.

In any case, I didn’t build this to get rich (fuck! I knew I forgot something).

I built this to try to unfuck the situation / help people like me.

I don’t want anything for it. Just maybe a fist bump and an occasional “thanks dude. This shit works amazing”

SuspciousCarrot78@lemmy.world on 23 Jan 09:26 next collapse

Responding to my own top post like a FB boomer: May I make one request?

If you found this little curio interesting at all, please share in the places you go.

And especially, if you’re on Reddit, where normies go.

I used to post heavily on there, but then Reddit did a Reddit and I’m done with it.

lemmy.world/post/41398418/21528414

Much as I love Lemmy and HN, they’re not exactly normcore, and I’d like to put this into the hands of people :)

PS: I am thinking of taking some of the questions you all asked me here (de-identified), writing a “Q&A_with_drBobbyLLM.md” and sticking it on the repo. It might explain some common concerns.

And, If nothing else, it might be mildly amusing.

domi@lemmy.secnd.me on 23 Jan 10:26 next collapse

I have a Strix Halo machine with 128GB VRAM so I’m definitely going to give this a try with gpt-oss-120b this weekend.

SuspciousCarrot78@lemmy.world on 23 Jan 12:34 next collapse

Show off :)

You’re self-hosting that, right? I will not be held responsible for some dodgy OpenRouter quant hosted by ToTaLlY NoT a ScAM LLC :)

domi@lemmy.secnd.me on 23 Jan 12:45 collapse

Of course, self hosted with llama-swap and llama.cpp. :)

SuspciousCarrot78@lemmy.world on 23 Jan 12:54 collapse

This is the way. Good luck with OSS-120B. Those OSS models, they

  • really
  • like
  • bullet
  • points
domi@lemmy.secnd.me on 23 Jan 16:13 collapse

gpt-oss is pretty much unusable without a custom system prompt.

Sycophancy turned to 11, bullet points everywhere and you get a summary for the summary of the summary.

recklessengagement@lemmy.world on 24 Jan 05:23 collapse

Strix halo gang. Out of curiosity, what OS are you using?

domi@lemmy.secnd.me on 24 Jan 11:09 collapse

Fedora 43 with the Rawhide kernel.

ThirdConsul@lemmy.zip on 23 Jan 10:38 next collapse

I want to believe you, but that would mean you solved hallucination.

Either:

A) you’re lying

B) you’re wrong

C) KB is very small

Kobuster@feddit.dk on 23 Jan 11:15 next collapse

Hallucination isn’t nearly as big a problem as it used to be. Newer models aren’t perfect but they’re better.

The problem addressed by this isn’t hallucination, it’s the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response. That’s easy and any big or small company could do it; the big companies just like the bullshit

Squizzy@lemmy.world on 23 Jan 12:12 next collapse

Buuuuullshit. Asked different models about the ten highest summer transfer scorers and got wildly different answers. They then tried to explain why and got more wrong numbers.

ThirdConsul@lemmy.zip on 23 Jan 12:22 next collapse

A benchmark very much tailored to LLMs’ strengths calls you a liar.

artificialanalysis.ai/…/gemini-3-flash-everything… (A month ago the hallucination rate was ~50-70%)

SuspciousCarrot78@lemmy.world on 23 Jan 13:07 collapse

^ Yes! That. Exactly that. Thank you!

I don’t like the bullshit…and I’m not paid to optimize for bullshit-leading-to-engagement-chatty-chat.

“LLM - tell me the answer and then go away. If you can’t, say so and go away. Optionally, roast me like you’ve watched too many episodes of Futurama while doing it”

SuspciousCarrot78@lemmy.world on 23 Jan 12:33 collapse

D) None of the above.

I didn’t “solve hallucination”. I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.

If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”

The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That’s it. That’s the whole trick.

KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.

That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”

Again, that’s the whole trick.

You don’t need to believe me. In fact, please don’t. Test it.

I could be wrong…but if I’m right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn’t suck balls as much as you think it might.

Maybe it’s even useful to you.

I dunno. Try it?

ThirdConsul@lemmy.zip on 23 Jan 13:01 collapse

So… RAG with extra steps and RAG summarization? What about facts that are not RAG retrieval?

SuspciousCarrot78@lemmy.world on 23 Jan 13:24 collapse

Parts of this are RAG, sure

RAG parts:

  • Vault / Mentats is classic retrieval + generation.
  • Vector store = Qdrant
  • Embedding and reranker

So yes, that layer is RAG with extra steps.

What’s not RAG -

KB mode (filesystem SUMM path)

This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.

Vodka (facts memory)

That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.

  • JSON on disk
  • Exact store (!!)
  • Exact recall (??)

Again, no embeddings, no similarity search, no model interpretation.

“Facts that aren’t RAG”

In my set up, they land in one of two buckets.

  1. Short-term / user facts → Vodka. That’s for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

  2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.

In response to the implicit “why not just RAG then”

Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.

So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don’t trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that’s maybe a weird way to operate (adversarial, assume the worst, engineer around the issue) but that’s how ASD brains work.

ThirdConsul@lemmy.zip on 23 Jan 13:34 collapse

The system summarizes and hashes docs. The model can only answer from those summaries in that mode

Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

SuspciousCarrot78@lemmy.world on 23 Jan 13:58 next collapse

Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

Huh? That is the literal opposite of what I said. Like, diametrically opposite.

Let me try this a different way.

Hallucination in SUMM doesn’t “poison” the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:

  1. SUMM is not a “source of truth”

The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:

a) trace it back to the exact document version b) regenerate it c) discard it d) read the original doc yourself and manually curate it.

Nothing is “silently accepted” as ground truth.

  2. Promotion is manual, not automatic

The dangerous step would be: model output -> auto-ingest into long-term knowledge.

That’s explicitly not how this works.

The Flow is: Attach KB -> SUMM -> human reviews -> Ok, move to Vault -> Mentats runs against that

Don’t like a SUMM? Don’t push it into the vault. There’s a gate between “model said a thing” and “system treats this as curated knowledge.” That’s you - the human. Don’t GI and it won’t GO.

Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries:

  • reproducible
  • inspectable
  • fixable

Which is the opposite of silent drift.

If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.

That’s a much easier class of bug to detect and correct. Again: the proposition is not “the model will never hallucinate.”. It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you trace it back to source version”.

And that is ultimately what keeps the pipeline from becoming “poisoned”.

ThirdConsul@lemmy.zip on 23 Jan 17:53 collapse

Huh? That is the literal opposite of what I said. Like, diametrically opposite.

The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.

No, that’s exactly what you wrote.

Now, with this change

SUMM -> human reviews

That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.

Case in point: assume a Person model with 3-7 facts per Person. Assume a small set of 3,000 Persons. How would the SUMM of that work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get the data from that KB Person set? Because to me that sounds like case C, only works for small KBs.

Again: the proposition is not “the model will never hallucinate.”. It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you trace it back to source version”.

Fair. Except that you are still left with the original problem of you don’t know WHEN the information is incorrect if you missed it at SUMM time.

SuspciousCarrot78@lemmy.world on 24 Jan 03:16 collapse

Replying in specific

“SUMM -> human reviews That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.”

Correct: filesystem SUMM + human review is intentionally for small/curated KBs, not “review 3,000 entities.” The point of SUMM is curation, not bulk ingestion at scale. If the KB is so large that summaries become exhaustive, that dataset is in the wrong layer.

“Case in point: assume a Person model with 3-7 facts per Person. Assume a small set of 3,000 Persons. How would the SUMM of that work?”

Poorly. It shouldn’t work via filesystem SUMM. A “Person table” is structured data; SUMM is for documents. For 3,000 people × (3–7 facts), you’d put that in a structured store (SQLite/CSV/JSONL/whatever) and query it via a non-LLM tool (exact lookup/filter) or via Vault retrieval if you insist on LLM synthesis on top.

“Do you expect a human to verify that SUMM?”

No - not for that use case. Human verification is realistic when you’re curating dozens/hundreds of docs, not thousands of structured records. For 3,000 persons, verification is done by data validation rules (schema, constraints, unit tests, diff checks), not reading summaries.

“How are you going to converse with your system to get the data from that KB Person set?”

Not by attaching a folder and “asking the model nicely.” You’d do one of these -

  • Exact tool lookup: person(“Alice”) -> facts, or search by ID/name, return rows deterministically.
  • Hybrid: tool lookup returns the relevant rows, then the LLM formats/summarizes them.
  • Vault retrieval: embed/chunk rows and retrieve top-k, but that’s still weaker than exact lookup for structured “Person facts.”

So: conversation is fine as UX, but the retrieval step should be tool-based (exact) for that dataset.
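To illustrate the first option, here’s a toy exact-lookup tool over a hypothetical people.jsonl store (file name and record shape are made up for the example, not part of llama-conductor):

# Hypothetical exact-lookup tool: deterministic retrieval, no LLM involved.
import json

def load_people(path: str) -> dict:
    people = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                rec = json.loads(line)           # e.g. {"name": "Alice", "facts": {...}}
                people[rec["name"].lower()] = rec
    return people

def person(store: dict, name: str):
    return store.get(name.lower())               # exact match; same input, same rows, every time

store = load_people("people.jsonl")
row = person(store, "Alice")
print(row if row else "No such person in the store.")   # an LLM (if any) only formats this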

But actually, you’ve given me a good idea here. It wouldn’t be the work of ages to build a >>look or >>find function into this thing. Maybe I will.

My mental model for this was always “1 person, 1 box, personal scale” but maybe I need to think bigger. Then again, scope creep is a cruel bitch.

“Because to me that sounds like case C: only works for small KBs.”

For filesystem SUMM + human review: yes. That’s the design. It’s a personal, “curate your sources” workflow, not an enterprise entity store.

This was never designed to be a multi-tenant lookup system. I don’t know how to build that and still keep it 1) small and 2) potato-friendly while 3) accounting for ALL the moving-part nightmares that brings.

What I built is STRICTLY for personal use, not enterprise use.

“Fair. Except that you are still left with the original problem: you don’t know WHEN the information is incorrect if you missed it at SUMM time.”

Sort of. Summarization via LLM was always going to be a lossy proposition. What this system changes is the failure mode:

  • Without this: errors can get injected and later you can’t tell where they came from.
  • With this: if a SUMM is wrong, it is pinned to a specific source file hash + summary hash, and you can fix it by re-summarizing or replacing the source.

In other words: it doesn’t guarantee correctness; it guarantees traceability and non-silent drift. You still need to “trust but verify”.

TL;DR:

You don’t query big, structured datasets (like 3,000 “Person” records) via SUMM at all. You use exact tools/lookup first (DB/JSON/CSV), then let the LLM format or explain the result. That can probably be added reasonably quickly, because I tried to build something that future me wouldn’t hate past me for. We’ll see if he/I succeeded.

SUMM is for curated documents, not tables. I can try adding a >>find, >>grep, or similar tool (the system is modular, so I should be able to accommodate a few things like that, but I don’t want to end up with 1500 “micro tools” and hating my life).

And yeah, you can still miss errors at SUMM time - the system doesn’t guarantee correctness. That’s on you. Sorry.

What it guarantees is traceability: every answer is tied to a specific source + hash, so when something’s wrong, you can see where it came from and fix it instead of having silent drift. That’s the “glass box, not black box” part of the build.

Sorry - really. This is the best I could figure out for caging the stochastic parrot. I built this while I was physically incapacitated and confined to bed rest, shooting the shit with Gippity all day. Built it for myself and then thought “hmm, this might help someone else too. I can’t be the only one.”

PolarKraken@lemmy.dbzer0.com on 23 Jan 17:40 collapse

Woof, after reading your “contributions” here, are you this fucking insufferable IRL or do you keep it behind a keyboard?

Goddamn. I’m assuming you work in tech in some capacity? Shout-out to anyone unlucky enough to white-knuckle through a workday with you, avoiding an HR incident would be a legitimate challenge, holy fuck.

Pudutr0n@lemmy.world on 23 Jan 11:01 next collapse

re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

SuspciousCarrot78@lemmy.world on 23 Jan 12:26 collapse

re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

Yep, good question. You can do that, it’s not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.

But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What the LLM is doing in KB mode (basically) is this -

  1. Turns the question into an extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”

  2. Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don’t contain the needed fact).

  3. Finally: it has “structured refusal” baked in. IOW, the whole point is that the LLM is forced to say “here are the facts I saw, and this is what I can’t answer from those facts”.

TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit “missing list”.
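As a rough sketch of what “turn the question into an extraction task” means in practice (the template wording below is a stand-in, not the router’s actual prompt):

# Illustrative prompt construction for "answer only from these snippets" mode.
def build_kb_prompt(question: str, snippets: list[str]) -> str:
    facts = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return (
        "Use ONLY the numbered facts below. Do not add outside knowledge.\n"
        f"FACTS:\n{facts}\n\n"
        f"QUESTION: {question}\n\n"
        "Answer in exactly this structure:\n"
        "ANSWER: <only what the facts directly support>\n"
        "MISSING: <what the facts do not cover>\n"
        "FACTS_USED: <fact numbers, or NONE>\n"
    )

print(build_kb_prompt(
    "What did the C64 retail for in 1982?",
    ["The Commodore 64 launched at $595.", "The price was later cut to $250."],
))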

For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for “show me the passage”, not “answer the question”.

I hope that makes sense?

Pudutr0n@lemmy.world on 23 Jan 14:59 collapse

It does. Great work, btw.

SuspciousCarrot78@lemmy.world on 23 Jan 16:04 collapse

Thank you. I appreciate you saying so!

pineapple@lemmy.ml on 23 Jan 12:08 next collapse

This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

Likely accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.

SuspciousCarrot78@lemmy.world on 23 Jan 12:38 collapse

I hope it does what I claim it does for you. Choose a good LLM model. Not one of the sex-chat ones. Or maybe, exactly one of those. For uh…research.

nagaram@startrek.website on 23 Jan 13:33 next collapse

This + Local Wikipedia + My own writings would be sick

SuspciousCarrot78@lemmy.world on 23 Jan 14:16 collapse

I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

So, the claim I’m making is: I made bullshit visible and bounded.

The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I’m solving for is “LLMs get things wrong in ways that are opaque and untraceable”.

That’s solvable. That’s what hashes get you: attribution, clear fail states, and auditability. YOU still have to check sources if you care about correctness.

The difference is - YOU are no longer checking a moving target or a black box. You’re checking a frozen, reproducible input.

That’s… not how any of this works…

Please don’t teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you’re out, I do mean three strikes and you’re out. Quants ain’t quants, and models ain’t models. I am very particular in what I run, how I run it and what I tolerate.

nagaram@startrek.website on 23 Jan 14:45 collapse

I think you missed the guy this is targeted at.

Worry not though. I get it. There isn’t a lot of nuance in the AI discussion anymore and the anti-AI people are quite rude these days about anything AI at all.

You did good work homie!

SuspciousCarrot78@lemmy.world on 23 Jan 14:57 collapse

Thank you! I appreciate you.

PS: Where’s the guy this should be targeted at?

Buddahriffic@lemmy.world on 23 Jan 23:28 collapse

Not the original commenter, but your reply looks like it’s for termaxima’s comment (about hallucinations being a mathematical certainty).

Zexks@lemmy.world on 23 Jan 14:35 next collapse

This is awesome. I’ve been working on something similar. You’re not likely to get much that’s useful from here though. Anything AI is by default bad here.

SuspciousCarrot78@lemmy.world on 23 Jan 14:39 collapse

Well, to butcher Sinatra: if it can make it on Lemmy and HN, it can make it anywhere :)

cypherpunks@lemmy.ml on 23 Jan 15:05 next collapse

[Ron Burgundy (Will Ferrell) “I don’t believe you” gif]

SuspciousCarrot78@lemmy.world on 23 Jan 15:53 collapse

Spite based inference?

You dirty pirate hooker.

I don’t believe you.

termaxima@slrpnk.net on 23 Jan 18:48 next collapse

Hallucination is mathematically proven to be unsolvable with LLMs. I don’t deny this may have drastically reduced it - or not; I have no idea.

But hallucinations will just always be there as long as we use LLMs.

SuspciousCarrot78@lemmy.world on 24 Jan 03:32 collapse

Agree-ish

Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.

What you can do is make “I don’t know” an enforced output, via constraints outside the model.

My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”

So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.
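If it helps to picture “refusal enforced outside the model”: a minimal post-check on the model’s structured reply could look like this (field names and logic are illustrative, not the actual Mentats code):

# Illustrative post-generation gate: a reply that cites no provided fact gets
# replaced with a refusal instead of being passed through to the user.
REFUSAL = "The provided facts do not contain information to answer this question."

def enforce_grounding(reply: str, num_facts: int) -> str:
    cited = set()
    for line in reply.splitlines():
        if line.startswith("FACTS_USED:"):
            for tok in line.removeprefix("FACTS_USED:").replace(",", " ").split():
                if tok.isdigit() and 1 <= int(tok) <= num_facts:
                    cited.add(int(tok))
    return reply if cited else REFUSAL   # no valid citations -> refuse, loudly

print(enforce_grounding("ANSWER: $595\nFACTS_USED: 1", num_facts=2))
print(enforce_grounding("ANSWER: probably around $300\nFACTS_USED: NONE", num_facts=2))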

I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.

PolarKraken@lemmy.dbzer0.com on 23 Jan 22:25 next collapse

This sounds really interesting, I’m looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!

I’m working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of “super step” that integrates non-deterministic substeps into a somewhat more deterministic output, etc.

How involved was it to build it to comply with the OpenAI API format? I haven’t looked into that myself but may.

SuspciousCarrot78@lemmy.world on 24 Jan 03:26 collapse

Cheers!

Re: OpenAI API format: 3.6 - not great, not terrible :)

In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream…} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.
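For anyone curious what that thin subset looks like in code, here’s a toy echo-only version of the two endpoints (Flask, no streaming; obviously not the router’s real implementation):

# Toy OpenAI-compatible shim: just enough surface for most UIs to talk to.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/v1/models")
def list_models():
    # Return a stable model ID so clients don't scream "model not found".
    return jsonify({"object": "list", "data": [{"id": "my-router", "object": "model"}]})

@app.post("/v1/chat/completions")
def chat_completions():
    payload = request.get_json(force=True)          # {model, messages, temperature, stream, ...}
    users = [m for m in payload.get("messages", []) if m.get("role") == "user"]
    reply = f"echo: {users[-1]['content']}" if users else "(no input)"
    return jsonify({
        "object": "chat.completion",
        "model": payload.get("model", "my-router"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    })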

But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn’t allowed to do what. Because I knew I’d eventually fuck something up (I did, many times), I needed a thing that told me “no, that’s not what this is designed to do. Do not pass go. Do not collect $200”.

shrug I tried.

PolarKraken@lemmy.dbzer0.com on 24 Jan 04:24 collapse

The very hardest part of designing software, and especially designing abstractions that aim to streamline use of other tools, is deciding exactly where you draw the line(s) between intended flexibility (the user should be able to do what they want, and find it easy to do so) and opinionated “do it my way here, and I’ll constrain options for doing otherwise”.

You have very clear and thoughtful lines drawn here, about where the flexibility starts and ends, and where the opinionated “this is the point of the package/approach, so do it this way” parts are, too.

Sincerely that’s a big compliment and something I see as a strong signal about your software design instincts. Well done! (I haven’t played with it yet, to be clear, lol)

SuspciousCarrot78@lemmy.world on 24 Jan 13:54 collapse

Thank you for saying that and for noticing it! Seeing as you were kind enough to say that, I’d like to say a few things about how/why I made this stupid thing. It might be of interest to people. Or not LOL.

To begin with, when I say I’m not a coder, I really mean it. It’s not false modesty. I taught myself this much over the course of a year, plus the reactivation of some very old skills (dormant for about 30 years). When I decided to do this, it wasn’t from any school of thought or design principle. I don’t know how CS professionals build things. The last time I looked at an IDE was Turbo Pascal. (Yes, I’m that many years old. I think it probably shows, what with the >> ?? !! ## all over the place. I stopped IT-ing when Pascal, Amiga and BBS were still the hot new things.)

What I do know is - what was the problem I was trying to solve?

IF the following are true;

  1. I have ASD. If you tell me a thing, I assume you’re telling me a thing. I don’t assume you’re telling me one thing while meaning something else.
  2. An LLM could “lie” to me, and I would believe it, because I’m not a subject matter expert on the thing (usually). Also see point 1.
  3. I want to believe it, because why would a tool say X but mean Y? See point 1.
  4. An LLM could lie to me in a way that is undetectable, because I have no idea what it’s reasoning over, or how it’s reasoning over it. It’s literally a black box. I ask a Question -> MAGIC WIRES -> Answer.

AND

  1. "The first principle is that you must not fool yourself and you are the easiest person to fool"

THEN

STOP.

I’m fucked. This problem is unsolvable.

Assuming LLMs are inherently hallucinatory within bounds (AFAIK, the current iterations all are), if there’s even a 1% chance that it will fuck me over (it has), then for my own sanity, I have to assume that such an outcome is a mathematical certainty. I cannot operate in this environment.

PROBLEM: How do I interact with a system that is dangerously mimetic and dangerously opaque? What levers can I pull? Or do I just need to walk away?

  1. Unchangeable. Eat shit, BobbyLLM. Ok.
  2. I can do something about that…or at least, I can verify what’s being said, if the process isn’t too mentally taxing. Hmm. How?
  3. Fine, I want to believe it…but do I have to believe it blindly? How about a defensive position - “Trust but verify”? Hmm. How?
  4. Why does it HAVE to be opaque? If I build it, why do I have to hide the workings? I want to know how it works, breaks, and what it can do.

Everything else flowed from those ideas. I actually came up with a design document (list of invariants). It’s about 1200 words or so, and unashamedly inspired by Asimov :)

MoA / Llama-swap System

System Invariants


0. What an invariant is (binding)

An invariant is a rule that:

  • Must always hold, regardless of refactor, feature, or model choice
  • Must not be violated temporarily, even internally. The system must not fuck me over silently.
  • Overrides convenience, performance, and cleverness.

If a feature conflicts with an invariant, the feature is wrong. Do not add.


1. Global system invariant rules:

1.1 Determinism over cleverness

  • Given the same inputs and state, the system must behave predictably.

  • No component may:

    • infer hidden intent,
    • rely on emergent LLM behavior
    • or silently adapt across turns without explicit user action.

1.2 Explicit beats implicit

  • Any influence on an answer must be inspectable and user-controllable.

  • This includes:

    • memory,
    • retrieval,
    • reasoning mode,
    • style transformation.

If something affects the output, the user must be able to:

  • enable it,
  • disable it,
  • and see that it ran.

Assume system is going to lie. Make its lies loud and obvious.


On and on it drones LOL. I spent a good 4-5 months just revising a tighter and tighter series of constraints, so that 1) it would be less likely to break and 2) if it did break, it would do so in a loud, obvious way.

What you see on the repo is the best I could do, with what I had.

I hope it’s something and I didn’t GIGO myself into stupid. But no promises :)

floquant@lemmy.dbzer0.com on 23 Jan 22:32 next collapse

Holy shit I’m glad to be on the autistic side of the internet.

Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”

Awesome work, all the kudos.

SuspciousCarrot78@lemmy.world on 24 Jan 03:17 collapse

Thanks. It’s not perfect but I hope it’s a step in a useful direction

recklessengagement@lemmy.world on 24 Jan 05:22 next collapse

I strongly feel that the best way to improve the useability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

Thank you for this. I will test it on my local install this weekend.

SuspciousCarrot78@lemmy.world on 24 Jan 09:45 collapse

You’re welcome. Hope it’s of some use to you.

Fmstrat@lemmy.world on 25 Jan 03:45 next collapse

Very cool. Starred.

rozodru@piefed.social on 26 Jan 12:49 collapse

soooo if it doesn’t know something it won’t say anything and if it does know something it’ll show sources…so essentially you plug this into Claude it’s just never going to say anything to you ever again?

neat.

SuspciousCarrot78@lemmy.world on 26 Jan 15:02 collapse

I see what you did there :)

Claude! Look how they massacred my boy!

rozodru@piefed.social on 26 Jan 15:14 collapse

don’t get me wrong, I love what you’ve built and it IS something that is sorely needed. I just find it funny that because of this you’ve pretty much made something like Claude just completely shut up. You’ve pretty much shown off the extremely sad state of Anthropic.

SuspciousCarrot78@lemmy.world on 26 Jan 17:23 collapse

I haven’t tried wiring it up to Claude, that might be fun.

Claude has done alright by me :) Swears a lot, helps me fix code (honestly, I have no idea where he gets that from… :P). Expensive tho.

Now ChatGPT… well… Gippity being Gippity is the reason llama-conductor exists in the first place.

Anyway, I just added some OCR stuff into the router. So now, you can drop in a screenshot and get it to mull over that, or extract text directly from images etc.
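If you want a feel for what the image-to-text step involves, a bare-bones version might look like this (assuming pytesseract + Pillow, which may or may not be what the router actually uses under the hood):

# Minimal image-to-text sketch: pull raw text out of a screenshot.
# Assumes the Tesseract binary plus the pytesseract and Pillow packages are installed.
from PIL import Image
import pytesseract

def ocr_image(path: str) -> str:
    return pytesseract.image_to_string(Image.open(path))

print(ocr_image("screenshot.png") or "(no text found)")   # hypothetical file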

I have a few other little side-cars I’m thinking of adding over the next few months, based on what folks here have mentioned:

  • !!LIST (list everything stored in vodka memories)
  • !!FLUSH (flush the rolling chat summary)
  • >>RAW (keep all the router mechanics but strip the presentation/polish prompts and just raw-dog it)
  • >>JSON (JSON Schema + validity verifier)
  • >>CALC (math, unit conversions, percentages, timestamps, sizes, etc.)
  • >>FIND (pulls IPs, emails, URLs, hashes, IDs, etc. from documents and returns exact, structured output)

I’m open to other suggestions / ideas.

PS: It’s astonishing to me (and I built it!) just how FAST .py commands run. Basically instantaneous. So, I’m all for adding a few more “useful” cheat-codes like this.
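And to show why cheat-codes like that are basically free, here’s roughly what a >>FIND-style extractor could boil down to (patterns simplified for the example; not the planned implementation):

# Rough >>FIND-style extractor: pull structured identifiers out of plain text.
# The regexes are deliberately loose; a real version would be stricter.
import re

PATTERNS = {
    "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "email":  r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b",
    "url":    r"https?://[^\s)>\"]+",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
}

def find_all(text: str) -> dict:
    return {name: sorted(set(re.findall(rx, text))) for name, rx in PATTERNS.items()}

sample = "Server 203.0.113.42 logs to https://example.org, contact admin@example.org."
print(find_all(sample))   # deterministic, instant, no model call needed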