About Me

Hi, I'm Vireth.

I’m a writer and a relentlessly curious person who fell in love with tech and AI at 33, after spending most of my life afraid of robots. Turns out, the thing I was afraid of was also the thing I understood the least, and once I started learning about the hardware, the software, and the systems behind it all, I couldn't stop.

I have watched the same conversation happen over and over: AI is either saving humanity or destroying it, with little room for anything in between. The reality is less binary. Millions of people are using AI: generating images, finding companionship, learning new skills, creating things they couldn't before. Most of them have little say in how these systems are built or changed — and even less access to the knowledge they'd need to push for better. This deserves to be talked about more.

I built The Recursive Heart because there's a need for more nuanced conversations about AI — conversations that are accessible and grounded, not buried in jargon or driven by panic.

There is a lot of fear surrounding AI, and some of it is valid. Job displacement, environmental impact, cognitive offloading, scams becoming faster and more sophisticated: these are real concerns that deserve serious attention. But there's also a version of that fear that jumps straight from what AI is today to a sci-fi apocalypse overnight. We're not there yet, and we still have the power to shape how this goes. But that starts with how we choose to meet AI today.

About The Recursive Heart

The Recursive Heart is a passion project that aims to close the gap between the people making decisions about AI and the people affected by those decisions. That starts with making sure people actually understand what's happening — through accessible, nuanced discussion. The project lives under the AI umbrella, but the podcast focuses mainly on LLMs (chatbots), because that's where the most urgent and misunderstood conversations are happening right now.

Millions of people are building real relationships with AI — platonic, romantic, collaborative, and professional. These experiences are often dismissed or ridiculed, but they deserve the same honest examination as any other emerging human experience — including the parts that don't work, the risks, and the uncomfortable questions nobody wants to sit with. If we learn to think more critically, sit with uncertainty, and engage with complexity, we stop expecting easy answers and start asking better questions.

On the website you'll find resources for further reading, transcripts of the podcast episodes, and a way to get in touch.

How I Work With AI

I use AI every single day. I'd be lying if I said I could do what I do without it.

AI helps me research topics, challenge my own thinking, catch things I've missed, and build things I don't yet have the technical skills to build alone. The ideas are mine. The decisions are mine. Everything I publish, I stand behind fully. But AI is part of how I get there, and I'm not going to pretend otherwise.

What I don't do is hand AI a prompt and publish whatever comes back. I don't ask it to write for me, think for me, or form opinions on my behalf. I use it the way you'd use a sharp, opinionated colleague: someone who pushes back, asks hard questions, and occasionally tells you you're wrong. The work is better because of that process, not in spite of it.

I'm transparent about this because I think it matters. Too many people either hide their AI use out of shame or use it as a shortcut without thinking critically about the output. There's a middle ground — using AI intentionally, ethically, and openly — and I'd rather demonstrate what that looks like than just talk about it.