"Let's go invent tomorrow instead of worrying about what happened yesterday." - Steve Jobs

In the world of technology, we often find ourselves at the intersection of boundless potential and inflated hype. With the rise of artificial intelligence over the last year, we've been forced to confront a pace of evolution in computing capabilities we haven't seen before. We find ourselves riding a transformative, rapidly evolving technology up the S-curve of a new computing paradigm.

The question is: where do we fall on the curve?

Chatbots arrived fast and furious, and because we'd never seen anything like them, using ChatGPT to come up with car ads in the voice of SpongeBob SquarePants felt novel, amazing, and, in hindsight, pointless.

As I write this, I have yet to find a compelling use case for LLMs in my daily workflow beyond acting as a study aid that helps me understand concepts I'm reading about.

Can it write an email for me? Basic responses, sure, but it stops there.

Can it write an Excel formula for me? Sometimes, but the output usually needs tweaking.
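
A hypothetical illustration of the kind of tweak I mean: ask a chatbot for a lookup formula and you'll often get one missing the exact-match flag, which silently returns wrong values on unsorted data.

    What the chatbot suggests:  =VLOOKUP(A2, Sheet2!A:B, 2)
    What you actually need:     =VLOOKUP(A2, Sheet2!A:B, 2, FALSE)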

Can it manipulate a document I upload? Barely.

Can it make slides for me? Very poor quality.

The only AI tools I've found actually useful are for creative (non-work) use cases: generating images, the AI baked into tools like Descript, or using Adobe Firefly to edit images.

With so much being said in the media about how AI will replace jobs, I'm reminded of the same hype around Bitcoin and self-driving cars. It's just not going to happen - at least not anytime soon. What we will see is companies supercharging their best employees and up-leveling their worst. They'll expect more production out of fewer people. Lower quality but more quantity - you already see this at companies that are no longer backfilling teams to the size they once were.

Will this get better?

Undoubtedly. Over the long term the models will continue to improve, the shortcomings will start to fade away, and in 10 years our world and work will look very different with these new “co-pilots.” But the claim that AGI is 3-5 years out is just nonsense. I hope I'm wrong, but if previous technological revolutions are any guide, we are probably 15-20 years out.

The challenge with technology is always the long tail. You can get most things to 99%, but it's the tricky final 1% that grinds and grinds. As adoption increases with scale, you learn more edge cases, use cases, and shortcomings. Over that long timescale you refine and refine, like a writer revising a draft.

The greatest barrier to adoption of these technologies is the mistakes they make. While early adopters can put up with a lot, the general population won't submit more than a few Google searches for something before they give up. Don't tell me they are going to “prompt engineer.”

So how do you get past that hurdle? Scale and broad distribution.

Flip the switch and overnight expose your capabilities to a billion people. Refine your model based on what you learn. Tweak your UX until things start to make sense. Experiment at scale until you reach what you hope to achieve.


Bullish. . .

Creative image AI tools like Adobe Firefly and Midjourney. One of the biggest unlocks will be speeding up the tools we already use, or cutting the time it takes to learn the numerous complex tools we already have. 3D-model this object in Blender? Perfectly remove those unwanted people from my photo?

Small AI workspace improvements like Google Workspace with AI, Microsoft's tools, and Gmail autocomplete.

Focused, specialized capabilities like Sierra's customer service AI.

Human-generated anything.