Writing

Toward a Fairer AI Economy

As artificial intelligence continues to evolve, it’s becoming clear that many of today’s most powerful models have been trained on the collective output of the internet: blogs, articles, forums, books, podcasts, and countless other forms of media — all without consent. Unlike traditional creative industries, where reuse typically comes with licensing fees, attribution, or royalties, AI’s use of this material has so far operated in a grey area: one that assumes participation without permission and generates value without compensation.

The Sycophant in the Machine: When we test our best ideas on AI, are we learning—or just looking for comfort?

In the old world of offices, ideas were stress-tested in real time. You’d float a notion over coffee. Riff in a meeting. Share a half-formed theory with a colleague by the printer. If the idea had merit, it might gather momentum. If not, it usually died a quiet death. Either way, there was something grounding about the exchange. It was social, improvisational, and crucially, unpredictable.

AI and the Mangrove Problem

Walk along a tropical shoreline and you’ll find thickets of mangroves where the sea meets the land. Their roots form a knotted shelter where juvenile fish hide from predators. Strip those mangroves out and the reef looks fine for a while. But without nurseries, young fish never reach maturity. A few years later, the reef collapses.

In Defence of Enshittification

Every designer has felt it: that pang of frustration when you’re asked to make a product worse. Maybe it’s hiding a feature behind a paywall. Maybe it’s adding extra steps to the sign-up flow to capture more data. Maybe it’s cramming in additional ads in places you know will annoy people, just to squeeze out more revenue. It can feel like the opposite of what we signed up for. We’re here to improve things, not to degrade them.