It's one year today since I started writing Embracing Enigmas, and in that time I've put out over 75,000 words! It's been quite a journey so far, and I appreciate all of the support along the way!
What I've tried to do with this newsletter is apply the knowledge and experience I’ve acquired from over a decade of solving a myriad of machine learning and artificial intelligence problems, particularly those in a business context. At the same time, I seek to introduce and advance various mental models to help frame, assess, and solve problems. Some of this is to help show you how to apply ML/AI well and some of it is to break down what’s currently happening in the world.
It's pretty interesting to see what resonates with the community. There were posts I thought would be fantastic but only did OK, posts I was unsure of that really resonated with people, and everything in between. Here are the most popular posts from the past year:
Why Your Generative AI Startup Will Fail - The post that started it all. I still stand by data, verification, and UI/UX as the three determining factors of a successful generative AI product.
We've Entered the Era of Hyper-Personalization - Understanding what the modification and personalization of content will look like when AI can be used to create content for the individual.
Moving at the Speed of Belief - The limiting factor in the acceptance of an AI system is whether users trust it. This digs into the various risks and how to manage them.
Preoccupation with Optimization - Successful models can lead to over-optimization, which makes systems less robust and failures larger.
Ensuring Success in Modeling Projects - Reveals the Model Impact Thesis for aligning business objectives with model targets and determining when to stop a project.
The Future of AI is Partnerships and Acquisitions - Why companies are likely to obtain AI systems and supporting assets by buying rather than building, and how interesting sources of data can often be acquired more readily by purchasing a company than through regular data acquisition channels.
Tinkering Part 2: Simulating Outcomes - Why setting a tinkering budget is important, and what impact it has, explored through mathematical simulation.
AI Snake Oil - Understanding where AI can actually make improvements and where it can't.
There are also a few concepts I coined in these posts that continue to prove themselves. Judging by the feedback I received, these struck a chord with many of you.
The Synecdoche Conundrum: Given a sufficient quantity of AI agents successfully mimicking real human activity on a platform, how can you distinguish between real people and fully digital actors? How can you trust the content on a platform? How do you identify anomalous behavior that has become part of the broader signal?
The Zygote Fallacy is the incorrect belief that the present state of a system will resemble its medium- to long-term state, particularly in rapidly changing environments. The belief is further reinforced by near-term changes that appear to confirm a prediction. In short: incorrectly extrapolating the present.
The pieces I put out are meant to stand the test of time. While some of them reference or were inspired by current events, they aren't meant to be consumed and forgotten. In that vein, I'll be creating a print book of all of these posts and sending it out to paid subscribers as a token of appreciation. Feel free to upgrade if you'd like to receive one.
Thanks again for reading this weekly newsletter. I’m happy to hear your thoughts on what has resonated with you or areas you’d like to see me write about next. Looking forward to another great year.