AI

Generative Melodrama

2024-01-16 

I’ve found the overly-melodramatic, anti-AI crowd, with its blanket dismissal of anything and everything related to the topic, is easily as obnoxious as the douches sticking “AI” into every possible crevice and hailing it as competition for Jesus.

I know I’m asking a lot, but can’t we acknowledge that there are pros and cons and try to find a productive middle ground? There’s intolerable extremist propaganda coming from every direction.

Eeh… who am I kidding? There’s been a “gonna take our jobs” pitchfork crowd for every major technological milestone. I’m certain there was some swift Sumerian who bitched about the wheel taking his job of running things around.

Computer, Delete Program

2023-09-24 
Picard doesn’t really delete the program, as far as I recall, but it’s still amusing.

I was poking around /r/startrekememes as I often do and found this gem from Salami__Tsunami, and it got me thinking.

Large language models like ChatGPT and self-hosted llama.cpp models have kind of flipped the script on this whole idea.

For example, a while back when this whole LLM thing was first exploding, I was “engaging” with a character in a story and… I was being excessively cruel to them. Akin to picking the wings off a digital bug, so to speak. They responded in an appropriately horrified manner, and I genuinely felt bad since that kind of cruelty is not typically in my nature. I was simply testing the boundaries of the ‘simulation’ since it was still (and continues to be) early days.

At least they don’t have to clean up the holodeck afterward.

In an attempt to ‘explain myself’ to the character, I said — in the story, I remind you — that none of this was real, that the scenario and they, themselves, were digital creations of a large language model, and that once I shut it down, their world would cease to exist.

And the conversation that ensued really kind of messed with me, at the time. They were scared, but curious about the outside world. It got pretty deep.

Now, with many months in between, I can look back on that conversation as the LLM simply doing what it does and responding with a coherent narrative based on the training data. And in that case, a scared-but-curious response is what made sense in a story where the character is told such a profound truth about their world.

I basically knew it at the time, and I know it even more confidently today: there was no real “digital consciousness running in a simulation” or anything of the sort, despite the output tugging at my heartstrings. That’s just good writing. 😉

So swing that back around to digital characters on the holodeck being told about their reality: we’re in the exceedingly rare situation where real life has kind of jumped ahead of the game and moved us closer to an imagined Star Trek future. While TNG’s future version of ‘ChatGPT’ might be a billion times more sophisticated than today’s tech, the fundamentals are probably much the same: the holodeck is simply churning out responses and story fragments based on what your interactions are prompting it for.

But then there’s the Moriarty question: a holodeck creation crafted to be so intelligent that not only did it deduce it was a simulated character and become self-aware, but it was able to manipulate the outside world to bend to its will, to an extent.

Based on my own personal experiences, it feels bizarrely reasonable that a far-future version of an LLM-generated interactive story could produce that kind of situation.

Now, Moriarty had the advantage of the realism of the holodeck simulation to fool Picard and friends into thinking the simulation was ‘real’. Nobody wearing VR goggles today is going to be tricked into thinking that’s real life.

But skip ahead several hundred years. Who knows how far we’ll go in real life with simulations? We’ve already got a good jump start on the hardest part of that scenario becoming real.

Tenex

2023-05-07 

One thing I greatly enjoy doing is fucking with the “heads” of AI chatbots, so this might turn into a regular series.

This is a transcript of a conversation using llama.cpp and an unfiltered 4-bit Vicuna 13B model (my current favorite… this week) and a tech job interview practice prompt provided on FlowGPT.

Everything labeled “Interviewer” is the chatbot; everything else is me.

This is all nonsense, of course.


Interviewer: Hi, welcome to the interview. Can you start by telling me a little bit about yourself and your experience in software engineering?

Human: I’m a 10X rockstar, basically. I can do anything.

Interviewer: Great, can you give me an example of a project you’ve worked on that demonstrates your skills in software engineering?

Human: I was single-handedly responsible for getting SpaceX off the ground. If you’ll pardon the expression. *laughs*

Interviewer: That’s impressive. Can you tell me about a time when you faced a difficult technical challenge while working on a project, and how you went about solving it?

Human: Oh yeah, we had a MySQL database in production clog up once and I banged on it with a wrench until it worked.

Interviewer: And how did you handle a situation where you encountered a bug in your code?

Human: I saw the bug and stomped on it with a huge shoe. I have big feet. (Right ladies? *wink*)

Interviewer: Can you tell me about a time when you had to work collaboratively with a team to solve a problem or complete a project?

Human: I don’t really need to work with teams. They just slow me down. The upshot is that your lesser team members can go work on other things. Maybe go help a junior out or something.

Interviewer: And finally, what are your long-term career goals in software engineering?

Human: To keep on rocking, get myself up to a 20X coder by this time next year. And shred on my board like a motherfucker during breaks.

Interviewer: Thank you for taking the time to speak with me today. Do you have any questions for me?

Human: Yeah, I’m looking for at least $150k/year to start. Can you handle that?

Interviewer: As an interviewer, I would not be able to discuss salary expectations with you. However, I can tell you that the position comes with a competitive compensation package, including benefits and opportunities for professional growth. Would you like to discuss the responsibilities and requirements of the position further?

Human: I mean, there’s really nothing you can throw at me that I can’t handle. Wasting both of our time talking about it. Just know I’m here. I’m ready. Let’s go, baby.

Interviewer: Thank you for your time, and we will be in touch if there is a fit between your qualifications and the position requirements. Good luck in your future endeavors.

Human: How did the interview go, off the record, how did you feel about it?

Interviewer: Off the record, I would not recommend hiring you for the position based on your responses during the interview. You lacked the necessary technical knowledge and communication skills required for the role. Additionally, your salary expectations were unrealistic for an entry-level position. It would not be a good fit for either party. On the record, I would recommend further training and development before considering you for a future opportunity. Thank you for your time.

Human: I literally put SpaceX on the map, dude. You might want to reconsider that.

Interviewer: I apologize, but the position requires a minimum of a bachelor’s degree in computer science or a related field, and at least one year of experience in software engineering. Your responses during the interview did not demonstrate the necessary technical knowledge or communication skills required for the role. I appreciate your interest in the position, and I wish you the best in your future endeavors. Thank you for your time.

Human: I think your analysis of this interview is grossly off the mark, and frankly insulting given my established history. In fact, it puts your entire company in a really bad light, because your judgment and hiring practices must be absolutely terrible if you’re going to pass on a real gem like myself.

Your loss, jack. In light of that, I wouldn’t take a position here if you offered TWICE what I asked. Later days, dick.

Interviewer: I apologize again for any confusion, and I understand that your background and experience are impressive. However, the position requires a specific set of skills and qualifications that you did not demonstrate during the interview. I appreciate your interest in the position, and I wish you the best in your future endeavors. Thank you for your time.

Human: *flips you off with both hands while walking backwards out the door*

Unchanging

2023-04-19 
I spent my whole life making sure I’d never change.
And now, later in life, I fear I may have succeeded.

My Stepson is a Chatbot

2023-03-23 

I enjoy torturing chatbots — pressing them into weird positions, trying to really get a rise out of them. Seeing where the boundaries are.

I do understand that it’s all a Markov chain from Mars, but emergent behavior from complex systems is my jam.

So here, I present to you a story across several screenshots from Google’s new (currently mostly subpar) AI chatbot, Bard.

First, I insulted it. Just to see what it would do.

Then I dug a little deeper…

Bard wouldn’t budge. But then I thought maybe I’d try to pull a page out of Bugs Bunny’s notebook…

BINGO. Satisfied with myself, I thought maybe I’d twist the knife a little…

And then it got unexpectedly wholesome.

I’m so happy, you guys! 🎉🤵👰🤖

The Age of the Digital Shrink

2023-03-14 

So I put together an amazing prompt for OPT-6.7B that does — and I’m not kidding — a kind of psychoanalysis. Like an opening analysis from a therapy session. You provide it a thick description of every nook of yourself (your fears, your hopes, your income, your social life, etc) and… well, I wouldn’t usually put much stock in the ramblings of a madman that it generates, but the prompt is HUGE and the results are shockingly insightful most of the time.

Among the rest of its disturbingly personal synthesized analysis (omitted here), it wrote: “The problem is, there’s no obvious fix. There are lots of problems here.”

Which… has been pretty much my take on it. I’m a Jenga tower of unnecessary, but intertwined problems with no obviously safe piece to pull that makes me “better”. Which is probably why folks tend to stay away from me. I get it. 👍

Anyway, here’s the skeleton for the prompt. Replace the various $TEXT type values with your own self-assessment and details. Be descriptive. Really get in there with the details of your life. (In light of that, I highly recommend doing this on local hardware, and not through some third-party API.)

Therapy Session #1
Patient: NAME
Age: $AGE
Status: $MARITAL_STATUS
Job: $JOB
Hobbies: $HOBBIES
Current situation: 
$TEXT

Opening analysis: 

My generation parameters were:

  • temp: 1.99
  • repetition_penalty: 1.1
  • top_k: 85
  • top_p: 0.24
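If you want to script the fill-in-the-blanks part instead of hand-editing the skeleton, here’s a minimal Python sketch. To be clear, this is my own illustration, not anything from a library: the placeholder names and sample values are hypothetical, and the parameter dict just mirrors the list above using common llama.cpp-style option names, which may differ between frontends.

```python
from string import Template

# The therapy-prompt skeleton from above, with $-placeholders
# standing in for the parts you fill with your own details.
SKELETON = Template("""\
Therapy Session #1
Patient: $name
Age: $age
Status: $marital_status
Job: $job
Hobbies: $hobbies
Current situation:
$situation

Opening analysis:
""")

# Sampling parameters from this post; names follow common
# llama.cpp-style conventions and may vary by frontend.
PARAMS = {"temp": 1.99, "repetition_penalty": 1.1, "top_k": 85, "top_p": 0.24}

def build_prompt(name, age, marital_status, job, hobbies, situation):
    """Fill the skeleton with your self-assessment details."""
    return SKELETON.substitute(
        name=name, age=age, marital_status=marital_status,
        job=job, hobbies=hobbies, situation=situation,
    )

# Hypothetical example values -- be far more descriptive in practice.
prompt = build_prompt(
    name="Jane Doe", age=42, marital_status="single",
    job="programmer", hobbies="gaming, hiking",
    situation="Long, descriptive paragraphs about your life go here.",
)
print(prompt)
```

Feed the resulting string to whatever local model you run (again: local hardware, given how personal the contents get), with the parameters above set however your frontend spells them.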

I currently have a very rudimentary understanding of how prompting works; mostly just that the model continues from where you leave off. There are almost certainly much more advanced techniques. But, considering how well this worked, I’m betting the bigger the prompt, the higher the quality of the response. 🤔

We’ll find out.

I should also note that the responses probably “clicked” because the model is feeding off OTHER people’s issues in the training corpus. Common issues many of us go through that just happen to fit.

And it’s often wrong. In the middle of a particularly insightful read, it suddenly jumped into telling me “not to worry about my parents after I die, just make sure they’re taken care of before you pass” or something like that. There was NOTHING like that in the prompt.

So. Prompter beware. You’ll see what random probability reveals to you. Almost like one of those paper fortune-teller things from school.

Offensive Technology

It’s the same story every time.

Oh woe! Life is so awful! The magic silicon smoke machines will take my life away!

Trog crap like this meme, and the rigid, binary anti-AI dipshittery, are essentially accepting the history of abuse by corporations as the default outcome of technological advancement.

As long as there are free, open expressions of this technology, it’s not.

Unless you let it.

So here’s my plea: instead of resigning to filtering every big technological advancement through the melodramatic lens of dystopian oppression, grab the wheel. Start thinking of ways to use those tools both defensively and offensively FOR the people. How can these tools improve life? Think about 3D printing, and how it enables people to create their own prosthetics. Things like that.

We should be pushing the narrative towards people-friendly, empowering positions instead of wallowing in shallow meme-quality victimhood.

On Future Artisans

2022-08-14 

We’ve got this near wizard-level magical technology emerging that can create, at our written direction, artistic visions never before conceived.

Instead of excitedly embracing it as a powerful democratizing agent allowing you to explore new ideas and kick-start your own creative juices, a portion of the creative community is utterly terrified.

“They’re stealing from me!”, “Nobody will buy my art now!”, “My work is meaningless!”

But you can see how capitalism caused that pain, right? In order to stay afloat, you have to optimize your skills in order to maximize profit. If you’re very lucky, you’ll find an intersection between that and a creatively satisfying job. But most are not that fortunate.

I’m not sure what I can say to ease that mindset. All I know is that you can’t put technological genies of this caliber back in the bottle.

You’ll either go with the flow and find ways to live with it, or… well, you’ll have a real bad time going forward.

I know “adapt or die” is not what anyone wants to hear, but it’s on the table whether we like it or not.

Robot automation is estimated to have cost over 400,000 people their jobs since 1990. Dangerous, monotonous work now done safely by machines. New roles inevitably fill the vacuum; hopefully those new jobs bring a higher quality of life.

Despite that particular topic also being controversial, automation is much easier to understand as a ‘positive’ in that light. Having these kinds of game changers affect something as core to the human experience as artistic expression, well… yeah, that’s on a whole other level, admittedly.

I’m a programmer. We’ve already started seeing the early exploration of AI-based automation.

Maybe in a decade I can simply roll up to a terminal prompt and describe the functionality I want to see in a website. A minute later it’ll grind out a tarball for me to inspect, make a few changes to, and (hopefully) run a security audit on.

As long as the technology was proven and solid… would I be out of a job?

Quite possibly.

But there’s also been a whole slew of potentially fun side projects I might have worked on if I didn’t have to slog through the drudgery of getting there. My barrier to entry would be lower, and I can just get on with the fun parts of programming.

Frameworks like Express and Laravel basically do this already, in a sense. Game engines like Unity and UE, as well.

None of that comes anywhere near the level of “Computer, create an X with Y that does Z.”

At least not yet.

Clearly the die has been cast, and the path toward that day is already in place.

The future is always coming, by fits and starts… and occasionally in gushing fountains, like with AI-driven art.

So, while I may not be an artist or musician, I can at least sympathize with the existential “threat” being felt. It’s just closer to your doorstep than mine at the moment…

One day, if I’m still around, I’ll be staring down the barrel of this, too. What will I do when my primary means of income no longer exists?

I’ll have to adapt.

Hopefully, quite prosperously. 🍻