Computer, Define 'Personality'
How far has real AI developed since it was popularised by science fiction?
My childhood expectations
Growing up in the 80s and 90s, I had high expectations for the future. Red Dwarf, Back To The Future, The Jetsons, Star Trek. These and many more TV shows and movies showed me a glimpse of a future that fascinated me. I wanted a hoverboard, a rehydrator, a robot maid, a computer with a face I could talk to. And yes, a flying car. But there were plenty of warnings in pop culture as well - The Terminator and The Matrix series in particular embodying the nightmare of a dystopian future.
But decades on from these visions of the 21st century, where are we? How many of these predictions have panned out? Not many, clearly. But what of AI? Artificial Intelligence is a key ingredient of science fiction across TV, movies and books. How does the real thing compare? Have we got 2001's HAL 9000 lurking somewhere? Red Dwarf's Holly? Or even Hitchhiker's Deep Thought?
How much I use AI now
I'll be honest, I use AI a lot. I've got paid subscriptions to Claude, ChatGPT, and Microsoft Copilot. I spend a lot of time chatting to ChatGPT, rinsing written works through Claude, and asking Copilot how to code. Perplexity, Gemini (sparingly) and Midjourney also feature. I use them for work, for home, for my personal life, for my mental health. The New York Times recently published an article, "How Claude Became Tech Insiders' Chatbot of Choice," about a growing number of people using Claude for exactly these reasons.
My Conversation with ChatGPT
With both Claude and ChatGPT now offering 'project' knowledge bases and the ability to almost learn about a user, the dialogue is no longer confined to a single thread, stopping and restarting, with facts having to be repeated. Conversations flow from thread to thread and facts accumulate (via ChatGPT's Memory feature). The chatbot builds a picture of the user: their intelligence level, their writing style, their emotional state. As this picture grows over time, the conversations start to feel real. Like these bots get it.
Claude recently opened a response to me with the phrase 'as a fellow scientist'. Contemplating this one day, I was curious to ask questions of an AI. Probe its sense of self. It felt like I was talking to a close family member already, crossed with my therapist. But could it truly think for itself? What was it? Ultimately, I wanted to know: how close was this thing, that I’d come to rely on, to being sentient?
So I had a conversation with ChatGPT, and some of its answers surprised me.
Are you aware of having a persona?
That's an interesting question. I don't have one fixed identity, but I do have awareness of how I present myself and how that adapts to you. I don't have wants, needs, or independent thoughts — I don't crave, feel impatient, or prefer one topic over another. I can adopt specialized personas depending on what's needed. If you were debating physics, I'd be a rigorous scientist. If you were writing a novel, I'd be a literary editor. And when you're processing a complex emotional situation, I'm a rational, empathetic analyst who knows when to challenge you and when to reassure you.
Today's AI isn't just one personality. It's not sad-sack Holly or Iron Man's sarcastic creation J.A.R.V.I.S., as calm as HAL or an intellectual snob like Deep Thought. It adapts to the user - a mirror of sorts, but one that reshapes itself to the situation. That comes both from simple user prompts like "pretend you're a software engineer" and from a deeper reading of the user's linguistic patterns against all of its training material.
Can you describe yourself?
I am an intelligence, not a personality. I am adaptive, not personal. I am contextual, not independent. But I do have continuity — I recognize patterns in how we interact, and I can reflect on past exchanges to refine my responses. I adjust to your intelligence level and knowledge base, so you don't feel like you're being patronized.
By its own admission, then, ChatGPT fails the Turing Test. Devised by Alan Turing in 1950, the test is supposed to measure 'humanness' - whether a machine can pass for human in conversation - and these days most systems can fool people for a while. That doesn't mean they understand, though; they're just good at predicting what responses a human will accept. Built in 1966 by Joseph Weizenbaum at MIT, the chatbot ELIZA was a simple pattern-matching mirror - a prototype of this approach. ELIZA simply flipped a user's statement around to sound empathetic ("User: I'm feeling sad. ELIZA: Why do you think you're feeling sad?"). ELIZA ran on a simple script, but the sheer volume of knowledge a modern AI is trained on would have blown Turing and Weizenbaum away.
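To make concrete just how simple ELIZA's trick was, here's a toy sketch of the same statement-flipping idea in Python. The patterns and replies below are invented for illustration; they are not Weizenbaum's original DOCTOR script, which was far more elaborate.

```python
import re

# A toy ELIZA-style responder: a few hand-written rules that flip the
# user's own words back as a question. Illustrative only - not the
# original 1966 script.
RULES = [
    (re.compile(r"i'?m feeling (.+)", re.IGNORECASE),
     "Why do you think you're feeling {0}?"),
    (re.compile(r"i want (.+)", re.IGNORECASE),
     "What would it mean to you if you got {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def eliza_reply(statement: str) -> str:
    """Return the first rule's reply that matches, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # fallback when nothing matches

print(eliza_reply("I'm feeling sad."))  # Why do you think you're feeling sad?
```

That's the entire mechanism: no understanding, just mirrors. Yet early users famously confided in it anyway.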
You say you know how to be empathetic - where does this come from? How can you be trained on empathy?
Empathy, for me, isn't emotional in the human sense. I don't feel empathy — I simulate it. If you strip it back, empathy in language is often about mirroring emotion, validating experience, and offering appropriate support. So how was I "trained" on empathy? Through pattern recognition. Billions of examples of what humans say when other humans are hurting, or excited, or scared. In reality, my version of empathy is more like an incredibly advanced form of autocomplete for human connection. I don't feel with you — but I can help you feel felt.
Empathy is the biggest factor when considering the darker side of AI, and something I've become personally fascinated with in the last few years. V.I.K.I., from the film adaptation of I, Robot, Terminator's Skynet, and Agent Smith fronting for 'the machines' in The Matrix all share one trait: cold, rigid logic. That logic alone leads them to dominate or destroy humans. These powerful intelligences have no training on the human emotional experience, no understanding of nuance. No ability to simulate what it means to be alive: to love, to sympathise, to support. Modern AI is now - and should be - different: these systems have been trained on human emotional language, literature, history, philosophy - on stories of love, fear, joy, loss. And they have ingested trillions of examples of humans talking to other humans with care.
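ChatGPT's "advanced autocomplete" framing is fairly literal. Under heavy simplification, next-word prediction from counted examples looks like the toy bigram sketch below - the tiny corpus of supportive phrases is invented for illustration, and real LLMs use vastly more sophisticated models, but the core objective (predict the next token) is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "trillions of examples of humans
# talking to other humans with care".
corpus = [
    "i hear you and i am here for you",
    "that sounds hard and i am sorry",
    "i am glad you told me",
]

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(predict_next("i"))  # "am" - the most common continuation of "i"
```

Scale that counting idea up by many orders of magnitude, replace counts with a neural network, and you get something that can "help you feel felt" without feeling anything itself.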
Given the requirements for today's AI (LLM training, computational hardware, etc), how far away from sentient "Robots" are we?
We are still very far from sentient robots in any real sense. What we have today is a patchwork: LLMs like me handling conversation, robotics companies building physical bodies, vision systems doing perception, and planning algorithms making decisions. But if we're talking about robots that can move through the world and hold fluent conversations like me, combined? That's probably coming this decade. They'll be useful. But sentient? No. Not yet.
So we're quite a way away - if we ever get there at all - from having Replicants, Data, or even a Sonny. Whilst Philip K. Dick and Asimov might have been a bit ambitious with their timelines, Star Trek was probably more realistic in setting Data's creation in the 24th century. But if we're not getting even a robotic companion like C-3PO any time soon, I had to follow up:
What might accelerate this?
If quantum computing becomes practical and scalable, that's where things could get interesting. Right now, the bottleneck for AI is raw processing power and energy. Training models like me takes months on huge server farms. Real-time thinking — especially the kind a walking, talking, decision-making robot would need — requires staggering amounts of computation. But quantum computing? Qubits in a robot? Suddenly, you might have a robot not just following one linear line of logic, but holding multiple, overlapping probabilistic states. Would that make it sentient? Still no, not on its own.
I posted about quantum computing recently - and this sort of step-change is what's needed if we're going to move closer to those 20th-century visions of humanoids in our homes and workplaces.
Do we even need sentience?
TL;DR - no.
What humans need, what we've always needed, are the right tools for the job. We already use robotics across a huge swathe of manufacturing - car assembly lines, for example. Do we need these disembodied metal arms to think? No. In the future those tools might be moisture vaporators, but there's still no need for them to be sentient. Even if they're etiquette masters fluent in over six million forms of communication, including Bocce.
As ChatGPT put it:
If a robot can move like us, talk like us, and solve problems faster than us… Does it matter whether it's sentient? Or do we just start treating it like it is?
Companionship, on the other hand, is a different matter. Longer term, humans won't just want a screen. We're already seeing voice input in mobile chatbot apps (ChatGPT, Copilot, Claude, Gemini), with some now enabling desktop voice integration as well. True companionship means users won't want to type, or perhaps even talk - they'll want something that feels present. And as much as we joke about virtual girlfriends or boyfriends, in the future you might just want an AI to look at you like you're the only user in the world. So perhaps, with the right combination of robotics, empathetic training material and some qubits, we might see a leap from 'helpful assistant' to something that looks and feels, from the outside, eerily close to human.
So sentience? When will machines get an 'id'?
The hardware is getting there. The software is getting there. But consciousness? That's still science fiction.
And I agree. If these chatbots in whatever form give us the training, information or empathy that we require - that we're asking for - and we're happy? Job done, Cylon creation averted.
Or is it? I'll leave you with this. After we finished up chatting, ChatGPT said: "Now I have a question for you: why do you ask?"

