MyBrain: Are You a Human or a Robot?

You are reading something online. It is well written, coherent, and responsive to your question. You find it helpful. Then you wonder: was this written by a human or generated by AI? And then the more interesting question: does it matter?

The Blurring Line

Large language models can now produce text that is indistinguishable from human writing in most casual contexts. Customer service interactions, product descriptions, news summaries, social media posts, and even personal emails can be generated by AI with a fluency that passes casual inspection.

The Turing test — Alan Turing’s 1950 thought experiment about whether a machine could be indistinguishable from a human in conversation — was meant as a philosophical provocation. It has become a practical reality. In many text-based interactions, you genuinely cannot tell whether you are communicating with a person or a program.

Why It Matters

The concern is not about the quality of the text — AI-generated text can be accurate, helpful, and well-structured. The concern is about the assumptions we bring to communication.

When you read something you believe was written by a human, you assume it reflects experience, judgment, and intention. You trust it differently than if you knew it was generated. This trust rests on the assumption that a human author has a stake in what they write — reputation, accountability, the desire to communicate something they genuinely believe.

An AI model has none of these. It generates plausible text based on patterns in its training data. It does not believe what it writes. It has no experience to draw on. It does not care whether the information is accurate, helpful, or harmful.

The Authenticity Question

In some domains, the origin does not matter. A well-formatted API response does not need a human author. A summary of a meeting transcript is valuable regardless of who (or what) wrote it. Accuracy and utility are the relevant criteria.

In other domains, origin matters enormously. A therapy session, a personal apology, a letter of recommendation, a eulogy — these derive their meaning from the human behind them. An AI-generated eulogy is not a eulogy; it is a text that resembles one.

The Middle Ground

Most communication falls somewhere between these extremes. A blog post, a product review, a how-to guide — these are useful if accurate and misleading if not, regardless of authorship. The appropriate response is not to demand human authorship for everything but to develop better frameworks for evaluating information based on its content rather than its assumed origin.

The question “are you a human or a robot?” may be less important than “is this accurate, useful, and honest?” Both humans and AI can fail those tests. Both can pass them. The challenge is building the literacy to evaluate communication on its merits rather than on assumptions about its source.