Mind Crunches #21: The Ionian Enchantment Issue
Alternatively: Your annual, non-traditional list of summer reads!
Mind Crunches on Generative AI
This is what I was writing about GenAI business models back in February.
I am very skeptical about the long-term viability of all the "cool" generative AI startups. The reason is that "moat building" in generative AI is extremely difficult, and infrastructure costs are very high.
Since February, we have witnessed some amazing innovations in GenAI, but the tech industry is still debating what constitutes a good "AI moat". A few weeks ago, a very interesting internal Google memo was leaked to the press, arguing that neither Google nor OpenAI has a strong moat and that open-source LLMs will rule the industry. After that leak, a large number of blog posts were published, and some of them (like this and this) were really insightful. I am still not sure anyone has a good answer to what the right formula/business model for GenAI is, but my current high-level thinking is the following:
Consumer-focused/daily usage will favor open-source LLMs. For example, you probably won't need GPT-4 for your future personal iPhone or Instagram Co-Pilot; LLaMA can do the job.
Horizontal and/or customer-facing enterprise functions that can be optimized with GenAI (like HR, Finance, Customer Service) will favor enterprise-ready LLMs like Azure OpenAI. There are three important additional points to be made in this scenario: a) Data gravity: organizations will pick GenAI models based on the cloud provider they are already using for their data storage/analytics. b) Size of enterprise: SMBs may prefer working with GenAI startup LLMs the same way they purchase startup vs. big-tech SaaS solutions. c) Marketplace ecosystems: the model of ChatGPT plugins is powerful in terms of distribution economics. I believe we are still in the first inning regarding distribution and pricing of LLMs.
Industry-specific use cases (e.g. healthcare) will need LLMs trained on highly specific corpora, like Hippocratic AI.
Data centers running on nuclear-fusion technology to enable an AI-first economy, with UBI tokens distributed to cryptographically proven humans, sounds like a (not necessarily dystopian) sci-fi scenario, but this is apparently part of Sam Altman's vision for the future of humanity. We are witnessing the first steps towards such a vision not only because of the tremendous success of OpenAI but also because Helion and Worldcoin, the other ventures led by Altman, are making moves.
As I wrote in my previous post about Horizontal vs Vertical AI, it was just a matter of time until we saw the first industry-specific LLMs. Hippocratic AI is the first of many to follow.
One of the "bugs" of LLMs that created a moral panic a few months ago was "hallucinations". In a nutshell, a hallucination is when an LLM generates text that is factually incorrect or nonsensical. There are many initiatives, startups, and programs working to fix this, so I expect it to become a non-problem pretty soon, but it would be very shortsighted to dismiss hallucinations altogether. We can treat them as a feature, not a bug, in our efforts to experiment with new creative primitives and emergent attributes. a16z and USV have some really good essays on how hallucinations can inspire new frameworks for product development and media forms.
AI regulation is still one of the hottest public debates. My thesis (mostly guided by the law of unintended consequences) is still the same one I developed in my previous post. I recently came across a 2002 essay by Cass Sunstein on the Precautionary Principle and its natural tendency towards inaction. I found it to be an amazingly timely read and highly recommend it.
When LLMs debate each other, they reach higher levels of factual accuracy! In other words, LLMs can converse more intelligently than most humans on social media.
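To build intuition for why debate helps, here is a toy sketch of the multi-agent debate loop. This is my own simplification, not the researchers' actual method: real implementations query LLM APIs and exchange full reasoning traces, while here two stub "agents" simply hold an answer and a confidence score, and each round the less confident agent defers to the more confident one.

```python
# Toy sketch of a multi-agent debate loop (hypothetical simplification:
# real systems exchange full LLM-generated critiques, not confidence scores).

def make_agent(initial_answer, confidence):
    """An agent holds a candidate answer and a confidence score in [0, 1]."""
    return {"answer": initial_answer, "confidence": confidence}

def debate_round(a, b):
    """The less confident agent adopts the more confident agent's answer."""
    if b["confidence"] > a["confidence"]:
        a["answer"] = b["answer"]
    elif a["confidence"] > b["confidence"]:
        b["answer"] = a["answer"]

def debate(agents, rounds=3):
    """Run several rounds of pairwise debate; return both final answers."""
    for _ in range(rounds):
        debate_round(agents[0], agents[1])
    return agents[0]["answer"], agents[1]["answer"]

# A confidently correct agent pulls a weakly wrong agent to the right answer.
agent_a = make_agent("Paris", 0.9)  # correct, high confidence
agent_b = make_agent("Lyon", 0.4)   # wrong, low confidence
print(debate([agent_a, agent_b]))   # both converge on "Paris"
```

The interesting design question, which the toy version glosses over, is how real debates surface factual errors: agents must justify their answers, and unsupported claims tend not to survive cross-examination across rounds.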
Competition in open-source generative AI is fierce, and this is great news for everyone involved: users, developers, enterprises, and innovation overall. In my previous posts, I argued that we would soon see business models built on top of open-source models, and countries/governments seeking to build national LLMs for both economic and geopolitical reasons. As of today, the most advanced open-source LLM is Falcon, developed by the Technology Innovation Institute in Abu Dhabi. Another proof point of the UAE pursuing a leadership role in tech and policy innovation.
Hugging Face is one of the most exciting AI startups out there, but very few people talk about it. This is a good primer on their business model. When Everybody Is Digging for Gold, It's Good To Be in the Pick and Shovel Business.
Marc Andreessen's (now famous) essay on AI is a good read. But it's not great. It has some serious argumentative weaknesses, and unfortunately it shifts the AI dialogue toward a type of culture war, which is a priori unproductive.
One of the biggest challenges of modern democracies is the lack of citizens' participation in the commons. This creates room for groups to fill the gap, grabbing decision-making power and promoting their agendas. DAOs have been facing similar problems, with many members/token holders not engaging with voting, not proposing new ideas, etc. A governance LLM robot is a brilliant idea that probably just scratches the surface of how GenAI could revolutionize community engagement/building. Aave is already testing it.
Code Interpreter is magic. Simple as that!
LLMs as community moderators is a use case we should be talking more about.
How AI is impacting science, by the excellent Michael Nielsen.
Non-AI Mind Crunches
The UK is serious about Web3. Prediction: Crypto will be a key theme in the 2024 US Presidential elections.
It's great to see more and more thought leaders advocating for the natural fit between AI and Web3. It is something I have been very excited about since last year. Kyle Samani's and Tyler Cowen's posts capture very well the emerging solutions at the intersection of these two technologies. Bonus: I was recently invited to EY's Innovation Realized event to talk about AI + Web3. I will share the video of the panel conversation in the next post.
I was never a big fan of the "show me the use case" public discussion about Web3, because I find it very narrow-minded. The history of innovation shows us that novel technologies rarely have clear use cases that maximize value/utility in existing business models. However, I understand that many people want to see the "tangible value" that blockchain technologies can bring. This database of blockchain use cases across different verticals is a great initiative. Bonus 1: Paul Brody's book is launching next week. Bonus 2: If you are looking for crypto use cases, Dan Romero will always have the answer!
Lux Capital's quarterly report is an amazing read on the science of complexity through a business lens.
The Diff is probably the most popular blog in tech and finance after Marginal Revolution. I recently came across a great 2021 essay on how Machiavelli's theories, mainly those presented in the Discourses rather than The Prince, can be a great guide to building sustainable business models in Big Tech.
Action-led worlds is a recurring theme of this newsletter. Cedric Chin recently wrote a great post on how sometimes action, or in other words effectuation, is a much better strategy than "strategy" itself, i.e. trying to predict outcomes or competitor moves.
This primer on Varda Space is a great read about human ingenuity, space manufacturing, and "boring" business models that fuel emerging tech.
Nadia Asparouhova has turned out to be one of my favorite "culture and tech" writers. Her latest essays on tech talent scarcity and Silicon Valley's Civil War are excellent.
I have been inspired by Merlin Sheldrake since the day I read "Entangled Life". His latest paper on how fungi can help with carbon capture is fascinating.
Stewart Brand is an amazing human being and a personal hero of mine. His latest project focuses on the concept of "maintenance", but it's also a meta-experiment on what a book actually is. Bonus: It reminded me of the Plurality git-book experiment launched by Glen Weyl and Audrey Tang a few months ago.
Cormac McCarthy recently passed away. "The Kekulé Problem" is still one of the best essays I have ever read.
Steven Sinofsky on why people love to predict failures. A thread inspired by doomers predicting that the Vision Pro will be a disaster.
Bees love to be tutored! "Thus, as with birds, humans, and other social learning species, honeybees benefit from observing others of their kind that have experience."
I have been thinking about this tweet a lot!
It's been a few years since I stopped giving and asking for advice. There are cases where I ask for other people's input, but I spend a lot of time contextualizing it to my personal situation. This post explains very well why most advice is useless.
Recommended book: End Times: Elites, Counter-Elites, and the Path of Political Disintegration, by Peter Turchin. Bonus: I recently recommended a cliodynamics-trained History-GPT idea to Turchin on Twitter, and apparently he is already working on something similar. Exciting stuff!
Recommended podcast: Patrick Collison interviews Sam Altman (technically a video rather than a podcast)
Recommended Newsletter: The Lunar Society, Dwarkesh Patel
Quote of the month: "If you do everything, you'll win." Lyndon Johnson
Photo of the month: From Thomas Gravanis' latest travel project in Thailand!