Hello! I’m in the midst of a longer post but in the meantime I thought I’d round up a few thoughts I had on Twitter and share them here with commentary.

Commentary
I got all giddy when I read Ben Thompson’s piece on AI’s impact on the big five tech companies. About two weeks before Ben published his piece, I had the same thought as him with regard to AI’s implications for Google. It feels validating to know that I’ve been thinking on the same wavelength as Ben, who’s a widely respected tech and business analyst with the ear of pretty much every big tech CEO. If you want to see the expanded version of the thought surfaced by my tweet, I encourage you to read AI and the Big Five.
Quick Thread Summary
Products like ChatGPT have shown us the power of using context within a chat for coaching AI toward a useful outcome.
Providing the context to focus AI requires micromanaging, which doesn't scale well.
AI needs episodic memory¹ to become more autonomous. It needs the ability to load scenario-specific information into the prompt without user intervention (see the sketch after this list).
New techniques such as DeepMind’s RETRO and Stanford’s Demonstrate-Search-Predict may provide this.
Writing will become more important in the age of AI. To effectively delegate work to AI, good documentation is needed.
AI is already learning to use common software and browse the web to help people do actual work.
Deepfake videos and voice generators might also make AI appear more human.
Ultimately, your next coworker might just be an AI.
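To make the episodic-memory point concrete, here’s a minimal sketch of that retrieval step in Python. The word-overlap scorer and the `build_prompt` helper are my own toy stand-ins, not anything from RETRO or Demonstrate-Search-Predict; a real system would use learned embeddings to judge relevance.

```python
# Minimal sketch of "episodic memory" for an LLM: before each model call,
# retrieve the most relevant past episodes and load them into the prompt
# automatically, with no user micromanagement. The overlap scorer is a toy
# stand-in for the embedding similarity a real system would use.

def score(query: str, episode: str) -> int:
    """Toy relevance score: count words shared by the query and episode."""
    return len(set(query.lower().split()) & set(episode.lower().split()))

def build_prompt(query: str, memory: list[str], k: int = 2) -> str:
    """Pick the k most relevant episodes and prepend them as context."""
    relevant = sorted(memory, key=lambda ep: score(query, ep), reverse=True)[:k]
    context = "\n".join(f"- {ep}" for ep in relevant)
    return f"Relevant past episodes:\n{context}\n\nUser: {query}\nAssistant:"

memory = [
    "User prefers answers with code samples in Python.",
    "User is building an internal documentation search tool.",
    "User's deploy failed last Tuesday due to a missing API key.",
]

# The prompt now carries scenario-specific context without the user
# having to restate it, which is the autonomy the thread points at.
print(build_prompt("Why did my deploy fail?", memory))
```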
Commentary
Although I started the thread with a clickbaity hook, the main idea I wanted to explore is memory. I think memory management is the next major problem that needs to be cracked to get us one big step closer to general intelligence. AI is built on metaphors for human intelligence. The key insight that powers most modern large language models is that “Attention is All You Need”: by giving large language models the ability to pay specific attention to the most meaningful words or tokens in a piece of text, they are able to produce massively better results. I’m excited to continue exploring memory metaphors within AI.
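As a rough illustration of that insight, here’s the scaled dot-product attention computation at the heart of the Transformer, sketched in NumPy. The formula comes from the paper; the shapes and toy values are mine.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention from "Attention is All You Need":
    each query scores every key, and the softmax weights decide how much
    each token's value contributes to the output."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

# Three tokens with 4-dimensional representations (random toy values).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out, weights = attention(Q, K, V)
print(weights)  # each row sums to 1: how much each token attends to the others
```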


The link in the tweet above: AI-generated answers temporarily banned on coding Q&A site Stack Overflow
Commentary
Some call it the Dark Forest Theory of the web and others think of it as the Dead Internet Theory. Regardless of what it’s called, there’s this fear that AI is a cancer that will spread across the web until it crowds out human-produced content. I don’t know to what extent this is true, but I do believe that human-produced content will slowly command a premium as the ratio of AI-generated content to human-produced content increases. Places with strong community moderation like Reddit will continue to be a refuge for human content, and AI companies will be eager for the privilege of training their models on that content.
Private information stores will also continue to be valuable. Although I explored this idea in my last post for companies in a consumer-facing context, there are a lot of great internal applications for AI as well. Companies that produce their own corpus of text will now have the ability to make sense of it with the help of AI models. Every company can run its own private search engine over its own documentation or create a ChatGPT-like experience for industry-specific knowledge.
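Here’s a hedged sketch of what that could look like: rank a company’s internal docs against a question with a simple TF-IDF score, then ground a chat model on the best match. The sample docs, the scoring choice, and the `ask_llm` call are all hypothetical placeholders, not any particular product’s API.

```python
# Sketch of a "private search engine" over internal docs: rank the company's
# own documents against a question, then feed the top hit to a chat model
# as grounding. `ask_llm` is a hypothetical stand-in for whatever model
# API the company uses.
from collections import Counter
import math

def tf_idf_rank(question: str, docs: dict[str, str]) -> list[tuple[float, str]]:
    """Rank docs by a simple TF-IDF-weighted word overlap with the question."""
    n = len(docs)
    df = Counter(w for text in docs.values() for w in set(text.lower().split()))
    scores = []
    for name, text in docs.items():
        words = Counter(text.lower().split())
        s = sum(words[w] * math.log(n / df[w])
                for w in question.lower().split() if w in words)
        scores.append((s, name))
    return sorted(scores, reverse=True)

docs = {
    "onboarding.md": "new hires request VPN access through the IT portal",
    "expenses.md": "submit expense reports within 30 days of purchase",
}

best = tf_idf_rank("how do I get VPN access", docs)[0][1]
prompt = f"Answer using this internal doc:\n{docs[best]}\n\nQ: how do I get VPN access"
# ask_llm(prompt)  # hypothetical call to the company's chosen model
print(prompt)
```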
That’s all for now. Have a restful weekend!
¹ AI products today typically only have semantic memory. Semantic memory is general knowledge such as concepts, facts, and the meanings of the words that make up language. Episodic memory is the ability to retrieve episodes from a personally experienced past.