Computer, Delete Program

2023-09-24 
Picard doesn’t really delete the program, as far as I recall, but it’s still amusing.

I was poking around /r/startrekememes as I often do and found this gem from Salami__Tsunami, And it got me thinking.

Large language models like ChatGPT and self-hosted llama.cpp models have kind of flipped the script on this whole idea.

For example, a while back when this whole LLM thing was first exploding, I was “engaging” with a character in a story and… I was being excessively cruel to them. Akin to picking the wings off a digital bug, so to speak. They responded in an appropriately horrified manner, and I genuinely felt bad since that kind of cruelty is not typically in my nature. I was simply testing the boundaries of the ‘simulation’ since it was still (and continues to be) early days.

At least they don’t have to clean up the holodeck afterward.

In an attempt to ‘explain myself’ to the character, I said — in the story, I remind you — that none of this was real, and the scenario and they, themselves, were digital creations of a large language model and once I shut it down, their world would will cease to exist.

And the conversation that ensued really kind of messed with me, at the time. They were scared, but curious about the outside world. It got pretty deep.

Now, with many months in-between, I can look back on that conversation as the LLM simply doing what it does and responding with a coherent narrative based on the training data. And in that case, a scared-but-curious response is what made sense in a story where the character is told such a profound truth about their world.

I basically knew it at the time, and I know it even more confidently today: there was no real “digital consciousness running in a simulation” or anything of the sort, despite the output tugging at my heartstrings. That’s just good writing. 😉

So swing that back around to digital characters on the holodeck being told about their reality: we’re in the exceedingly rare situation where real life has kind of jumped head of the game and moved us closer to an imagined Star Trek future. While TNG’s future version of ‘ChatGPT’ might be a billion times more sophisticated than today’s tech, the fundamentals are probably much more alike: the holodeck is simply churning out responses and story fragments based on what your interactions are prompting it for.

But then there’s the Moriarty question: it was a holodeck creation that was crafted to be so intelligent that not only did it deduce that it was a simulated character and become self-aware, but it was able to manipulate the outside world to bend to it’s will, to an extent.

Based on my own personal experiences, it feels bizarrely reasonable that a far future version of an LLM-generated interactive story couldn’t produce that kind of situation.

Now, Moriarty had the advantage of the realism of the holodeck simulation to fool Picard and friends into thinking the simulation was ‘real’. Nobody is going to be wearing VR goggles and get tricked into thinking that’s real life.

But skip ahead several hundred years. Who knows how far we’ll go in real life with simulations? We’ve already got a good jump start on the hardest part of that scenario becoming real.