I Got Banned from r/rust for Using AI. Let's Talk About That.
Last week I posted my project to r/rust. It was auto-removed within minutes: not for spam, not for self-promotion, but because a moderator skimmed my repository and concluded it was "AI slop."
The evidence? A CLAUDE.md file in the repo root.
That file exists because I use Claude Code as part of my development workflow. I was transparent about it. And that transparency got me flagged.
Here's the thing: if I had simply deleted that file before posting, nobody would have noticed. The code would have been the same. The project would have been the same. The only difference was that I'd be less honest.
This isn't a post about being angry at Reddit moderators. They're volunteers dealing with a genuine flood of low-effort, AI-generated projects. I get it. But this experience exposed something the developer community hasn't figured out yet, and it matters.
The Project
Some context: I've been building Perry, a compiler that takes TypeScript and compiles it to native binaries. No V8, no Electron, no runtime. The compiled output calls platform APIs directly: AppKit on macOS, UIKit on iOS, Android Views on Android, GTK4 on Linux, Win32 on Windows.
Pry is the first app built with it: a JSON viewer now shipping on the Apple App Store and Google Play. It's deliberately small. The point isn't the JSON viewer. The point is proving that the compiler works in production, across platforms, in real app stores.
I posted Pry to r/rust because Perry's compiler is written in Rust. Seemed like the right audience.
What "AI-Assisted" Actually Means
The moderator's message was polite. They said my project "presents all signs of having made use of LLMs to generate code or text" and asked me to explain what was and wasn't generated.
Fair question. Here's the honest answer:
I use AI tools the way I use any other tool in my workflow. I designed Perry's architecture: the compilation pipeline, the type system mapping, how TypeScript gets lowered to native platform calls across five operating systems. That's months of thinking, prototyping, throwing things away, and starting over. No LLM did that for me. No LLM can do that, at least not today.
Where AI helps is implementation. Once I know what I want to build, AI can help me write it faster. It's a better autocomplete. Sometimes a lot better. But the direction, the decisions, the "what" and the "why" are mine.
This is the distinction that's getting lost in the current discourse: there's a massive difference between AI-created and AI-assisted.
The Spectrum Nobody Talks About
Right now, the developer community treats AI usage as binary. You either "used AI" or you didn't. But that's like asking whether you "used the internet" to build something. Of course you did. The question is how.
Here's the actual spectrum:
AI-created: Someone types "build me a Rust compiler" into ChatGPT, pushes the output to GitHub, and posts it as their project. The person contributed nothing except the prompt. This is what r/rust is trying to filter out, and rightfully so.
AI-assisted implementation: A developer with deep domain knowledge uses AI to accelerate the coding. They architect the system, make the design decisions, review every line, and debug the hard problems. The AI is a force multiplier, not the force.
AI as documentation/testing aid: Using AI to write READMEs, generate test cases, or improve commit messages. Practically everyone does this.
These are wildly different things. Treating them the same is like saying someone who used a calculator on an exam and someone who had another person take the exam for them both "cheated."
Meanwhile, in the Real World
While r/rust debates whether my project is too AI-tainted to post about, here's what's actually happening in the industry:
Anthropic's own CPO, Mike Krieger, has publicly stated that 90–95% of Claude Code is written by Claude Code itself. That's Anthropic's flagship developer tool, largely writing itself. Company-wide at Anthropic, 70–90% of all code is AI-generated. The creator of Claude Code, Boris Cherny, says he hasn't typed a single line of code by hand in months, shipping hundreds of pull requests written entirely by AI.
OpenClaw, the open-source AI assistant with 247,000 GitHub stars, explicitly welcomes "AI/vibe-coded PRs" in its contributing guidelines. It's one of the fastest-growing open-source projects in history, and nobody is asking contributors to prove they typed every character by hand.
Microsoft reports that about 30% of its code is AI-generated. Salesforce cites similar numbers. An OpenAI researcher publicly stated that 100% of their code is now AI-written.
This isn't fringe behavior. This is how software is being built right now, at the companies building the most important tools in the industry. The question isn't whether developers use AI; it's whether we're going to pretend they don't.
The Honesty Penalty
Here's what bothers me most: the current system punishes transparency.
If I removed CLAUDE.md from my repo, wrote "hand-crafted with love" in the README, and never mentioned AI, my post would still be up on r/rust. The code wouldn't change. The quality wouldn't change. The only thing that would change is that I'd be less honest.
We're accidentally building a culture where the smart move is to hide your tools. That's backwards. We should be encouraging developers to be open about their workflows, not penalizing them for it.
What I'd Like to See Instead
I don't have a perfect solution. Content moderation is hard, and the flood of genuinely low-effort AI projects is real. But here are some things that might help:
Judge the output, not the tools. Does the project work? Is the architecture sound? Does the developer understand their own code? These are the questions that matter. A CLAUDE.md file tells you nothing about quality.
Ask about the hard parts. If a moderator isn't sure whether a project is AI slop, ask the developer to explain the non-obvious decisions. Someone who prompted their way to a project can't explain why they chose one approach over another. Someone who architected a system can talk about tradeoffs all day.
Recognize that the line will keep moving. Five years from now, not using AI tools during development will be like not using Stack Overflow in 2015: theoretically possible, but weird. Communities need policies that age well.
Moving Forward
I'm going to keep building Perry. I'm going to keep using AI tools where they help. And I'm going to keep being transparent about it, even when that transparency costs me a Reddit post.
The compiler works. The apps are in the app stores. The source is open for anyone to read, criticize, or learn from. That should be what matters.
If you want to try Pry: App Store | Google Play | Source
If you want to talk about Perry's compiler architecture, I'm happy to go deep. That's the part I actually find interesting.
Ralph Kuepper is the founder and CEO of Skelpo GmbH, a software development company based in Germany. He's been building software for 18 years and is currently developing Perry, a TypeScript-to-native compiler.