Language Models
Markov Chains
Sequential Data and Recurrent Neural Networks
among the reasons I use large pre-trained language models sparingly in my computer-generated poetry practice is that being able to know whose voices I'm speaking with is… actually important, as is understanding how the output came to have its shape - @aparrish, full thread
LLM Training
Datasets for LLMs
Climate Impact
Code Examples and Implementations
Markov chains
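Before digging into the linked examples, here is a minimal sketch of the core idea in plain JavaScript: an order-1 (single-word) Markov chain built as a frequency table of which words follow which. It is an illustration of the technique, not the class example itself.

```javascript
// Minimal word-level (order-1) Markov chain sketch.
// The model maps each word to the list of words observed following it.
function buildModel(text) {
  const words = text.split(/\s+/);
  const model = {};
  for (let i = 0; i < words.length - 1; i++) {
    const current = words[i];
    const next = words[i + 1];
    if (!model[current]) model[current] = [];
    model[current].push(next);
  }
  return model;
}

// Walk the chain: repeatedly pick a random recorded continuation.
function generate(model, start, length) {
  let word = start;
  const output = [word];
  for (let i = 0; i < length; i++) {
    const options = model[word];
    if (!options) break; // dead end: no recorded continuation
    word = options[Math.floor(Math.random() * options.length)];
    output.push(word);
  }
  return output.join(' ');
}

const model = buildModel('the cat sat on the mat and the cat slept');
console.log(generate(model, 'the', 10));
```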
Replicate
Examples will be shared over email due to the use of the ITP proxy server.
- Single Prompt + Reply to Llama hosted on Replicate (a minimal direct-call sketch follows this list)
- ChatBot Conversations with Llama hosted on Replicate. This follows the specification in the Llama 3 Model Card.
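Since the class examples are distributed by email and route requests through the ITP proxy, the sketch below shows what a direct call to Replicate can look like using its official Node.js client. The model name meta/meta-llama-3-8b-instruct, the max_tokens parameter, and the REPLICATE_API_TOKEN environment variable are assumptions for illustration; check Replicate's documentation for the exact inputs a given model expects.

```javascript
// Hedged sketch: calling Replicate directly with its Node.js client.
// The class examples use the ITP proxy server instead, so treat this as illustration only.
// Assumes `npm install replicate` and a REPLICATE_API_TOKEN environment variable.
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// "meta/meta-llama-3-8b-instruct" is one Llama model hosted on Replicate;
// any other hosted language model can be substituted here.
const output = await replicate.run("meta/meta-llama-3-8b-instruct", {
  input: {
    prompt: "Write a two-line poem about ransom notes.",
    max_tokens: 128,
  },
});

// Language models on Replicate typically return an array of text chunks.
console.log(output.join(""));
```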
Ollama
These examples require working with p5.js locally on your computer and outside of the web editor. Some resources for doing so can be found in my workflow video series.
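As one possible starting point for a local workflow, the sketch below prompts a locally running Ollama server over its HTTP API. The llama3 model name is an assumption; substitute any model you have pulled with `ollama pull`.

```javascript
// Minimal sketch: prompting a locally running Ollama server from JavaScript.
// Assumes Ollama is installed and running, and that `ollama pull llama3` has been run.
async function askOllama(prompt) {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",   // any locally pulled model name works here
      prompt: prompt,
      stream: false,     // return one JSON object instead of a token stream
    }),
  });
  const data = await response.json();
  return data.response;  // generated text is in the `response` field
}

askOllama("Describe a Markov chain in one sentence.").then((text) => {
  console.log(text);
});
```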
Assignment
- Read Language models can only write ransom notes by Allison Parrish and review The Foundation Model Transparency Index.
- Experiment with prompting a language model in some way other than a provided interface (e.g., ChatGPT) and document the results in a blog post. You can use any of the code examples above and/or try a variety of LLMs locally with ollama. Reflect on one or more of the following questions:
- How does the concept of LLMs as "ransom notes" influence your perception of using these models creatively?
- How does the hidden origin of the text that LLMs generate affect your sense of authorship or originality in your creative work?
- How does the metaphor of "collage" used to describe LLMs align with or differ from your creative process?
- How would you compare working with an LLM to other forms of text generation, such as using a Markov chain?
- Document your experiments in a blog post and add a link to the Assignment Wiki page. Remember to include visual documentation (screenshots, GIFs, etc.).