i try to keep tabs on what’s happening in the generative AI space, but there is a lot of noise. most of the content out there exists to sell an ai-related product or service, or to generate ad dollars through clicks (hi youtube).
looking back at what i’ve bookmarked, there’s a handful of thoughtful takes from folks in our industry (technology as a whole, not ai specifically) that stick out.
a blog post by simon willison on the responsibility of software developers who use gen ai. while it states the obvious (working code should be the bare minimum), it calls out a troubling trend of misplaced trust, with developers handing over too much responsibility to gen ai tools.
https://simonwillison.net/2025/Dec/18/code-proven-to-work
a presentation from gary marcus that i found a refreshing pushback against the gen ai hype. we’re incredibly prone to the eliza effect with llms, so much so that we’re willing to bet the farm on a field that is just one subset of ai.
goat programmer mitchell hashimoto on a vibe-coding session where he implemented a feature in ghostty (i use ghostty!). i find it a good example of a workflow where gen ai can be used productively by an experienced engineer.
https://mitchellh.com/writing/non-trivial-vibing
the ex-director of tesla’s ai team gave a “state of ai” style talk mid-year to a bunch of startups at y combinator. he seems to have assumed the role of ELI5 for all things ai post-tesla, and i am for it. i think the software 2.0 and 3.0 ideas are pretty out there, but i would not be surprised to find new ai products and services heeding his advice here.
an article by a veteran coder arguing that we’re going through a version of jevons paradox, with programmers as the resource.