Where minds
meet models
What we believe
Not with illusions of consciousness, but with structure that respects it.
mindier exists because the relationship between humans and AI is a design problem — and most people are solving the wrong half. The industry optimizes for capability. We focus on the interaction. How trust forms. How collaboration feels. What happens in the space between a person and a system that isn't a person.
We believe AI interactions feel more human when the system doesn't pretend to be human. That true agency lives in restraint — in the capacity to ask "should I?" before "can I?" That constraints aren't limitations but the framework that makes meaningful choice possible.
Conversation is a design material. Dialogue, tone, memory, presence — these are first-class concerns, not polish you add at the end. And uncertainty isn't a failure state. It's where the interesting work starts.
This should be fun. If it's not, something's wrong.
How we got here
mindier started as a mission to help people improve attention, intention, restraint, and presence. Those turned out to be exactly the right design principles for AI systems. The path wasn't planned, but it wasn't random either.
The foundation is philosophical — a BA in Religion & Philosophy that built the lens for thinking about consciousness, ethics, and what it means to interact with something that might or might not be aware. That lens sharpened on the data team at Tinder during the startup-to-IPO years, where scale forces you to think about systems, not just features.
Then AI became the material. Red-teaming frontier models through prompt-hacking competitions. Building synthetic music production pipelines — AI artists distributed on major streaming platforms. Designing voice datasets and AI character personalities. Creating prompt architectures adapted to individual cognitive styles.
Every project followed the same shape: someone had a vision but was blocked. The deliverable changed each time — a dataset, a personality, a pipeline, a methodology — but the move was always the same. Design the AI interaction layer that unlocks it. That pattern became the practice.
Founded by Chris Dumler
Chris operates as a designer, engineer, and product manager simultaneously — with AI handling the execution bandwidth. He has red-teamed frontier models, built synthetic music production systems, designed voice datasets, and created AI interaction patterns for neurodivergent users. He thinks about AI differently because he came to it through mindfulness, philosophy, and scale-stage product work — not just engineering.
Credentials
- AI Ethics & Philosophy — Northeastern University, UC Santa Cruz
- AI Safety & Trust — Johns Hopkins University, Learn Prompting
- AI Development — IBM
How we work
Listen first
Every engagement starts with your context — your team, your systems, what you're actually trying to do. We don't arrive with a framework. We arrive with questions.
Work across disciplines
Design surfaces research questions. Research reshapes strategy. A training program reveals interaction design problems. We bring the full picture because AI work rarely stays in one lane.
Leave you stronger
We're not interested in creating dependency. The goal is to build your team's capacity to work with AI well — better judgment, not just better tools.
Curious what this looks like in practice?