You’ve probably seen this (tweet? EXcrete?) going around whatever social media platform you’re on (the original, uh, missive is here).
It’s a fantastic point, pithily put.
Here’s a key question for anyone developing a newfangled solution (such as AI): Is the problem we’re solving a problem that actually needs solving?
Maciejewska’s1 suggestion of taking off our plates the things we don’t want to do is a great one, but I want to propose a different rule of thumb for first priority:
1. AI should address problems that we don’t already have (even imperfect) solutions for.
Things like: solving protein folding, making fusion power generation feasible at an industrial scale, understanding animal communication, etc. These are transformative problems to solve because they would bring us entirely new capabilities. (Protein folding is well underway, since researchers have had groundbreaking success using AI to tackle the problem.)
After that first priority, I suggest two other rules of thumb for prioritizing targets for AI solutions:
2. AI should address activities that AI can make significantly safer for people by doing them better.
We humans are fallible. So is AI, clearly, but we stand to benefit greatly by leveraging technology where its strengths offset our weaknesses. For example, computers can keep at repetitive or dull tasks indefinitely where humans get bored and nod off; autopiloting long flights is a great improvement over having humans stay manually alert for the entire flight. Machines can be more precise than our wobbly hands and can see things and patterns we can’t, giving AI great potential for improving surgery.
3. AI should take off our plates the things we don’t want to do.
Depending on what kind of problems you solve and how scalable the solution is, these could be transformative, too. But their transformations could also entail a lot of pain.
Consider: if machines could handle domestic chores, affordably and at a broad consumer scale, that would certainly free up a lot of people’s time, while also taking away a lot of lower-income jobs. They probably wouldn’t be in your home or mine any time soon; I imagine the robots would be very expensive initially. Rather, they’d probably show up first in places like hotels and hospitals, eliminating many housekeeping jobs in those industries. It’s a well-worn economic argument that such disruption frees people up to do better jobs, but the transition needs to be managed well or else it comes with a lot of hardship and turmoil.
And here’s the key criterion to add to this category of targets, one that hasn’t come up before when the disrupted industries have historically been agricultural and blue collar in nature: if you’re “freeing people up to do other jobs”, the jobs you’re freeing them from shouldn’t be the very activities people dream of one day earning enough from to leave their day jobs.
Category #1 seems like the kind of thing that would be most impactful to people’s (and other living things’) lives. But it’s large language models that are getting absurd amounts of money thrown at them, and not just once over, but with multiple competitors all going at the same thing. There are many reasons for this, which ChatGPT actually explains well, but fundamentally, transformative uses of AI have significant scientific and technical hurdles to clear, putting them further from development and commercialization, which ultimately leads to greater uncertainty and less eager investment.
But let’s look at this last point a little more. It reflects a risk aversion in tech execs, investors, and whoever else has influence over these investment decisions. Using LLMs to do existing functions—like customer service, “content” creation, marketing, personal assistants, unreliable interns—more cheaply is a safe bet to make more money. But it’s not a strategy that’s going to knock anything out of the park. This is reminiscent of recent discussions about how Hollywood’s risk aversion and focus on making money underlie all the boring movies.
But alright, fair enough: private companies and investors (in our current culture) tend to prioritize making money above all, and this is what we lean on government for, to take on the risk of making big bets on groundbreaking advances via grants, loans, and directly producing research that may not pay off for years, or maybe ever. It’s how we got GPS, the Internet, Tesla2, and much more.
Hoo, good thing this is an essay and not a speech… ChatGPT tells me this is how to pronounce the name:
The Polish surname "Maciejewska" is pronounced approximately as "mah-chee-YEV-ska." Here's a breakdown to help with the pronunciation:
Ma: like "ma" in "mama"
cie: pronounced "chee"
jew: pronounced "yev"
ska: pronounced "ska" (similar to "ska" in "skate")
Putting it all together: "mah-chee-YEV-ska."
Would not have gotten that right! Ok, add that to the “good use of AI” list.
I’m sure everyone, both those who feel positively and those who feel negatively toward Tesla, has mixed feelings about the fact that government money saved the company.
For me it would be the best search engine ever. “Oh, you already crawled and contextualized every bit of every image made by a human? Then please show me all the neopunk yellow jackets on ArtStation, Tumblr, and DeviantArt, oh, and only show me the puffy crop top ones.”
Right here on Substack there have been several conversations about people using AI images for their posts, and the best solution is to invite them to please use an existing image and link to/credit the creator. The AI image is already based on some artist’s work, so you would be doing everyone a favor.
Just like Pinterest made its house on curation, we can use AI for the greater good: searching through all the troves of work that wonderful people have put trillions of hours into making.
“I want to see a movie about noir superheroes,” and boom, you get Nightwatch and many, many more wonderful shorts that we have no clue exist, but that some wonderful, passionate artists gave years of their lives to make.