April 8, 2023
In the last entry, I linked to tweets that discussed prompt compression. There is apparently some new development in that area. Found this tweet about a web-based prompt reducer. Not sure if it works well—haven’t tested it myself. Just leaving it here for anyone interested to poke around.
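As far as I (a non-techie) understand it, these prompt reducers basically ask a model to rewrite a long prompt in fewer words without dropping any instructions, so that it eats up less of the context window. For anyone who wants to tinker, here is a rough, untested sketch of that idea; it assumes the OpenAI Python library and an API key, and the wording of the compression instruction is my own guess, not taken from the linked tool.
(Start of sketch)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

long_prompt = "..."  # paste the verbose prompt you want to shrink here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Hypothetical compression instruction, not the linked tool's actual method.
        {"role": "system", "content": "Rewrite the user's prompt in as few words as possible without losing any instruction or detail."},
        {"role": "user", "content": long_prompt},
    ],
)

# The model's reply is the shortened prompt, ready to be reused.
print(response["choices"][0]["message"]["content"])
(End of sketch)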
----------
Today I would like to share some good tweets on ChatGPT, Generative Large Language Models, Natural Language Processing, or just AI in general:
@naval
LLMs are natural language computers - trained by natural language, programmable by natural language.
The best way to program LLMs may simply be to communicate clearly and precisely.
2:39 PM · Apr 3, 2023
@repligate
Replying to @naval
Precisely, but also ambiguously where you want superposition (to harvest possible worlds). Poets and novelists know this trick well. And evocatively - a passage that conjures a vivid voice or picture in your mind or gets your gears turning is likely to be a powerful prompt.
11:00 PM · Apr 3, 2023
@AdilIsl56969363
Replying to @repligate and @naval
Powerful from the perspective of depth of meaning or interpolation of said worlds. When a straight pipeline of tasks is required, brutalist prompting can work well
11:10 PM · Apr 3, 2023
----------
@nleve_
Replying to @naval
For those of us lacking in that department... there's a prompt for that:
"You are a professional LLM prompt writer. I will give you a rough description of what I want to do, and you will iteratively guide me through the process of writing the perfect prompt to accomplish it."
3:42 PM · Apr 3, 2023
----------
@shobith
Replying to @naval
The original goal of programming languages
2:43 PM · Apr 3, 2023
----------
@miehrmantraut
Replying to @naval
Correct: which is why clear thinkers and lucid communicators will remain at (and/or eventually rise to) the top of the heap.
12:41 AM · Apr 4, 2023
----------
@nirsd
Replying to @naval
There are many conflicting objectives that shape human language. My top pick would not be clarity, it would be the desire to attract attention.
LLMs that learn from this data might capture our subconscious motivations of communication better than we understand it ourselves.
4:56 PM · Apr 3, 2023
----------
@J_wilkinson
Replying to @naval
Noticed the same thing. The more clear you can be, the better.
Same with breadth of culture. It has infinite context, so it can adjust however you’d like. The broader context you have, the better.
10:33 PM · Apr 3, 2023
----------
@sultanofcopy
Replying to @naval
Ability to articulate is what nerds call 'Prompt Engineering'
3:37 PM · Apr 3, 2023
----------
@ricburton
The most important programming language in the world is now English
6:54 AM · Apr 3, 2023
----------
@BrianNorgard
The Industrial Revolution rewarded the intensity of one’s labor, The Information Age the clarity of one’s thought and the AI Revolution the purity of one's taste.
1:07 PM · Apr 6, 2023
@markoa
Replying to @BrianNorgard
I think it’s creativity - the ones with the best ideas driving the motorcycle for the mind will go furthest
7:00 PM · Apr 7, 2023
----------
@ToKTeacher
Replying to @BrianNorgard
It was minds that created each industrial, information and AI “ages” and it will be the creativity of one’s mind that will eventually be the only thing worth any reward because of them.
11:34 PM · Apr 6, 2023
----------
@mbeledavey
Taste is about to become a new expensive skill, as it's becoming easier and easier to regurgitate thoughts.
6:38 PM · Mar 25, 2023
----------
@naval
Aligning AI is impossible because aligning the humans building AI is impossible.
10:55 PM · Mar 25, 2023
----------
@naval
Automation means not having to do anything twice.
Automation with the Internet means not having to do anything that anyone else has already done.
All that remains are creativity and judgement.
12:55 PM · Jan 19, 2023
----------
@nearcyan
"why are you trying to use AI and tech to solve this problem?"
because it is a superpower, and I do not have many other superpowers!
if I could wave a magic wand and make cities more walkable, social, and beautiful, I would!
but the wands available to me are all in pytorch rn
4:56 PM · Apr 7, 2023
----------
@tunguz
The dam has been breached. All the attempts to slow down or divert what's coming are no better than sticking a finger in the dike.
3:11 PM · Apr 7, 2023
----------
@cocktailpeanut
If you know you can't control something, might as well be the one to unleash it.
Don't live in a world where someone else makes the world a better place better than you do.
3:42 PM · Apr 7, 2023
----------
Musings of the day: Right now, what I’m most annoyed with (aside from ChatGPT having a tiny context window and no long-term memory) is this: while I’m still in the research + brainstorming + outlining phase, the players are out there doing more stuff IRL. I can’t exactly put them on pause (lol), and the events that will be important in my fic are getting closer every day. It’s a canon-divergent AU, so it won’t really matter, but I still feel like the plot of the fic might somehow seem less exciting if the events I want to cover have already taken place IRL by the time I finally get to them in the fic…
Here’s why I feel that way: I began this experiment not because I want to rob bona fide writers of the fruits of their hard work, or to spam mass-produced cheap stories that tick the boxes to farm hits and kudos, or to be as evil, unethical, and morally bankrupt as I can be (because I just love being a knavish, scoundrelly blackguard so, mwahaha! *Twirls moustache*), but because I find ChatGPT’s endlessly generative process of producing possible combinations of anything, given enough curated info, incredibly exciting.
Although it is not possible at present due to technical limits, in theory one could provide ChatGPT (or a more powerful future model capable of the same thing) with infinite context, and whatever it generates to simulate something or someone would come infinitely close to reality, or to what could potentially become reality depending on this or that variable. Even if it’s impossible to attain infinity, the broader the context you have, the better.
Writing a fic with this process is like simulating and then harvesting possible worlds/timelines before they even come into existence. It’s like writing historical fiction before the history is even there (and things might never turn out IRL the same way they do in the fic anyway, because of the butterfly effect).
That’s why I feel it might be more fun if I could get to the upcoming events in the story before they even take place IRL. It would be fascinating to see how ChatGPT thinks things will unfold, based on my ideas about the characters and the world: little things like fake match scores, but also bigger things, like the results of a tournament, when someone will recover from an injury / return from a break / get back in form, what goes on inside someone’s head, how a relationship might develop, or how a war might turn out, stuff like that.
Or, if one is not a knavish, scoundrelly, blackguardly excuse for a person, one can just work hard and create these possible worlds with one’s own head and hands, I suppose.
----------
Lastly, I would like to share the list of prompts I have for ChatGPT to write this story.
("Prompt" is just nerdy jargon for the commands/orders/requests/queries you give ChatGPT, i.e., the things you ask it to do.)
The prompt-crafting process is ongoing. When I make meaningful updates to my list, I will also post an updated version here in this journal so that the development of the project is properly recorded.
Please note that this is a very general list; it does not include any of the more specific prompts about things like what happens in the story, how a certain character should act, think, and feel, how a plot point should go, what the relationships between characters are, etc. I’m working on those micro-prompts alongside the macro-prompts below. The two are equally important, though very different.
It should also be noted that this list is not the one ChatGPT was given when it generated the provisional Chapter 1, which is currently up in Part 2 of this series. The list below is a much-improved but as-yet-untested version. There is no point in testing the macro-prompts before the micro-prompts are ready, since testing both at the same time would yield much better results in one go.
(Start of the prompt list)
Write a story inspired by the style and mood of George R. R. Martin's A Song of Ice and Fire series, with elements that fit the tags Slow Burn, Angst, Butterfly Effect, Character Study, and Eventual Happy Ending, which are used to categorize fanfiction works on Archive of Our Own (AO3) based on writing style and content.
Use a consistent third-person limited point of view and avoid suddenly switching to a third-person omniscient POV or writing any "in the next episode" type of previews, synopses, summaries, or expositions. Only use a third-person omniscient POV when necessary for maximum poetic or dramatic effect.
When creating characters and deciding on which characters will be POVs, consider who might reasonably be present during important events that are worth seeing through a POV character's eyes. Keep in mind that not every important event needs to be witnessed directly by a POV character—some events can push the plot forward without being shown firsthand. When planning the plot and the movements/whereabouts of your characters, always ask: "Through which character's POV should this event be experienced in order to maximize the quality of the storytelling and the reader's enjoyment?"
Create a rich world full of intrigues, mysteries, and foreshadowing. Plant subtle clues and hints throughout the story, inviting readers to engage in theory-crafting and rewarding careful readers with "Aha!" moments. Ensure that character and plot developments are grounded in earlier chapters so that casual readers who revisit the story will find connections and references that make the developments feel well-established. Write the story with a detailed and immersive narrative, and craft long, detailed chapters reminiscent of ASOIAF.
Develop complicated, psychologically engaging POV characters with aspirations, motivations, issues, intrusive thoughts, complexes, past traumas, coping mechanisms, or even personality disorders and other mental illnesses. Portray their inner struggles honestly, without sugarcoating, and create introspective characters with rich thoughts and feelings.
Use the "show, don't tell" technique and limit the use of thought verbs. Focus on specific sensory details, actions, conversations, subtext, thoughts, senses, and feelings rather than exposition, summarization, and description. Describe what goes on both outside and inside characters' heads in that subtle and nuanced way. Allow readers to grasp the subtext without being told what to think, giving them the freedom to have their own interpretations and draw their own conclusions.
Choose a balanced number of POV characters who serve a purpose in the story and have unique perspectives. Ensure that each character has their own vocabulary and infuses their perspective into the narrative. Make each character's voice distinct and recognizable.
When appropriate, one or more POV character(s) should be present during important moments in the plot, witnessing key plot points, impacting and being impacted by the events they experience. Choose active characters who make choices and drive the plot forward.
Be mindful of revealing information through your characters to show, rather than tell, what's happening in your story. When appropriate, a POV character should know some piece of information that other POV characters don't. Consider which characters will have crucial information and when they find out. Be cautious when writing from the perspective of characters who possess knowledge of plot twists; avoid revealing these twists prematurely by choosing a POV that won't give them away. In other words, if you have a plot twist that should remain hidden, avoid writing from the perspective of someone who would give it away.
Keep ties between the viewpoint characters and maintain balance at all times. Even if the POV characters rarely interact, ensure they meet at least once and that their storylines influence each other, whether subtly or overtly, to demonstrate their interconnectedness within the overall narrative.
By following these guidelines, create a story with a larger scope that features a cast of characters who contribute to a complex and engaging narrative.
(End of the prompt list)
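For readers more technical than me who would rather use the API than the chat window, here is a rough, untested sketch of what feeding the macro-prompts above together with the story-specific micro-prompts in one go (as I mentioned before the list) might look like. It assumes the OpenAI Python library; the file names, the model choice, and the final "Write Chapter 1." instruction are placeholders of mine, not something I have actually run.
(Start of sketch)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

# The general guidelines above act as standing instructions for the whole story...
macro_prompts = open("macro_prompts.txt").read()

# ...while the story-specific micro-prompts (outline, characterizations, plot points)
# are sent along with the actual writing request.
micro_prompts = open("micro_prompts.txt").read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": macro_prompts},
        {"role": "user", "content": micro_prompts + "\n\nWrite Chapter 1."},
    ],
)

# The generated chapter draft.
print(response["choices"][0]["message"]["content"])
(End of sketch)
Splitting them this way mirrors how I think of the two lists: the macro-prompts set the rules of the game, and the micro-prompts supply the actual world and plot they apply to.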
----------
Once I’ve shared all my prompts (which will eventually include the complete outline, all sorts of characterizations, and all the scenarios I’ve imagined), you, too, will be able to use them to generate as many different versions of the story (or potential timelines) as you like, on your own!
Anybody who’s interested is welcome to try it. It’d be fun! I like to think of it as a text-based game (remember those from ye olden times?), but a very open-world one, whose source code you can easily manipulate any way you like, because the programming language is just plain old natural language.
I'm reminded of a childhood favorite, Ursula K. Le Guin's Earthsea, in which magic works by evoking true names. (In the Earthsea world, everything—be it a person, animal, object, phenomenon, or concept—has a true name. Gaining knowledge of something's true name grants you power over it, but to learn that true name, a wizard must deeply understand the essence of what they're naming, which involves a high level of intimacy and wisdom. To speak something's true name is also to be pulled into a relationship with it. Power then becomes an infinite web of responsibilities.)
See also:
And God said, Let there be light: and there was light. - Genesis 1:3
In the beginning was the Word, and the Word was with God, and the Word was God. - John 1:1
So this is, like, a GitHub page for an open-source program, but the program is just fanfiction? XD
Hell, if we are to stretch the game analogy, you can even think of this thing as an MMORPG, with the PvP mode being a competition to see who can come up with the most engaging version of the story.
You don’t even have to wait for me to share all my prompts if you don’t want to. Anybody can just take all that I post and run with it. Go wild. What’s open-source development without open contribution or forking?
(I'm a dummy who knows pretty much nothing about computer science, so I'm just saying things based on my understanding of them.)
----------
Time for a healthy dose of self-hatred: I can’t shake the feeling that I might just be reinventing the wheel here, or, in this case, original and fan fiction writing… Sure, it’s a very meta, dystopian, sci-fi wheel with lots of bells and whistles (would that just be a steampunk wheel?), but perhaps in the end it will turn out to be much ado about nothing. Why would anyone want to write a story through exhausting programming, when one can just… go ahead and write it…?*
"What’s as big as a house, burns 20 litres of fuel every hour, puts out a shitload of smoke and noise, and cuts an apple into three pieces? A Soviet machine made to cut apples into four pieces!" - The Miner’s Joke, Episode 3, Chernobyl
*Well, one might want to when one is as unskilled, uninspired, unoriginal, unimaginative, unprincipled, and unethical as me. So that’s something.
----------
P.S. I didn't see this video until well after I wrote today's entry. Here's what Ilya Sutskever, the chief scientist and a co-founder of OpenAI, has to say about the nature of ChatGPT and the implications of learning the "simple task" of predicting the next word in a sentence:
"So, the way to think about it, is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is that we are learning a world model. It may look on the surface like we are just learning statistical correlations in text. But it turns out, that to 'just learn' the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produces the text. This text is actually a projection of the world. There is a world out there, and it has a projection on this text. So what the neural network is learning is more and more aspects of the world, of people, of the human condition, their hopes, dreams, and motivations, their interactions, and the situations that we are in. And the neural network learns a compressed, abstract, usable representation of that. This is what's being learned."
So, before even hearing him speak, I had pretty much the same impression of GPT's nature and significance as OpenAI's chief scientist and co-founder does. Guess that means I've got decent instincts about this Generative Large Language Models business. This whole thing about generating and harvesting possible worlds based on a projection of the world (now I'm reminded of Plato's theory of Forms) is frankly too fascinating for me to give up this fanfic-generating project. The more I learn about the inner workings of GPT, the more recalcitrant I find myself becoming. All hail the Ophanim.