Today in Science: Training AI to think like us

October 27, 2023: Hi all. I'm covering for Andrea Gawrylewski today. Read on for the latest on making AI "think" more like humans, dealing with groundwater declines and mitigating racism in health care.
Robin Lloyd, Contributing Editor
TOP STORIES

Groundwater Redirect

Compensating willing landowners for diverting stormwater runoff into scooped-out basins may help address global groundwater declines. Research recently published in Nature Water details how scientists chose so-called recharge sites on properties in Pajaro Valley, Calif., then calculated the net uptake of stormwater at those spots and compensated the landowners. Property owners whose basins sat atop porous ground (where runoff can sink into the soil) received rebates for helping to move the infiltrated water into the wider groundwater system.

The problem: Farmers and other water users throughout the world often extract groundwater faster than nature can replenish it. Overpumping can cause land to sink; streams, wetlands and wells to dry up; and seawater to creep inland underground. Extracted groundwater is mostly used to irrigate crops, so declines in availability could lead to a global food crisis.

The solution: Many small stormwater infiltration projects scattered across a landscape could give stormwater a chance to make it underground before it reaches the sea, thereby recharging the supply of groundwater, says Graham Fogg, a University of California, Davis, professor emeritus of hydrogeology. The new research explains how the approach and the financial incentives could be tweaked to work in a range of communities around the world.
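To make the incentive concrete, here is a purely hypothetical back-of-envelope sketch of how such a rebate might be tallied. The volumes, infiltration efficiency and dollar rate below are invented for illustration; they are not figures from the Nature Water study.

# Hypothetical "recharge rebate" arithmetic (all numbers invented for
# illustration; the study's actual accounting may differ).
captured_runoff_acre_ft = 12.0    # stormwater diverted into a landowner's basin
infiltration_efficiency = 0.70    # assumed fraction that sinks past the root zone
rebate_per_acre_ft = 260.0        # assumed dollars paid per acre-foot recharged

net_recharge = captured_runoff_acre_ft * infiltration_efficiency
rebate = net_recharge * rebate_per_acre_ft
print(f"Net recharge: {net_recharge:.1f} acre-feet; rebate owed: ${rebate:,.2f}")
# -> Net recharge: 8.4 acre-feet; rebate owed: $2,184.00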
A desilting pond helps slow down water, allowing sediment to settle out before water is directed toward the infiltration basins in the San Gabriel Spreading Grounds in Los Angeles County. Credit: Citizen of the Planet/Universal Images Group via Getty Images

Think Like Us

The key to developing less error-prone chatbots and other AI models could boil down to training them to generalize and be master remixers, like humans, rather than feeding the models oodles of training data. In a recent study, scientists trained an AI system to follow the logic of varied made-up grammars built from nonsense words. Ultimately, it could understand new configurations of words that it wasn't trained on. In academic terms, the system displayed an ability called "compositionality," which involves understanding the relationships among a set of components, deciphering never-before-encountered arrays of information and composing complex, original responses.

Why this is cool: The researchers tested their training idea by running a basic neural network through a set of tasks meant to teach the program how to interpret made-up languages. In the study, nonsense words corresponded to arrays of colorful dots, and the model was prompted to produce dots in response to phrases in the fake languages. After rounds of feedback from the researchers, the network learned to respond coherently.
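For a flavor of what such a made-up grammar looks like, here is a minimal, hypothetical Python sketch. The words, colors and rules are invented for illustration and are not the study's actual vocabulary or code; the point is that a learner that grasps the parts can decode word combinations it has never encountered.

# A hypothetical nonsense grammar in the spirit of the study (words,
# colors and rules invented for illustration, not taken from the paper).
# Primitive words each stand for one colored dot.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(words):
    # Made-up rule 1: "x kiki y" means "do y, then x" (a word-order rule).
    if "kiki" in words:
        i = words.index("kiki")
        return interpret(words[i + 1:]) + interpret(words[:i])
    # Made-up rule 2: a trailing "fep" repeats the preceding dots three times.
    if words[-1] == "fep":
        return interpret(words[:-1]) * 3
    # Base case: a single primitive word produces one dot.
    return [PRIMITIVES[words[0]]]

# A compositional learner that knows the parts can handle novel combinations:
print(interpret("dax fep".split()))           # ['RED', 'RED', 'RED']
print(interpret("wif kiki dax fep".split()))  # ['RED', 'RED', 'RED', 'GREEN']

The network in the study, of course, was never handed rules like these explicitly; it had to infer them from example phrase-and-dot pairs, which is what makes its success on novel combinations notable.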

What the experts say: The insights could also help illuminate the secrets of how AI systems—and humans—learn. "I think that [compositionality] is the most important definition of intelligence," says Paul Smolensky, a cognitive scientist at Johns Hopkins University who was not involved in the new research. "You can go from knowing about the parts to dealing with the whole."
TODAY'S NEWS
• Strange patterns in the orbits of small objects in the outer solar system could be explained by gaps in our understanding of gravity rather than by an as-yet-unseen new world such as the long-speculated Planet Nine. | 7 min read
• Millions of young birds die from extreme heat in U.S. farm fields in what researchers say is a growing threat from climate change. | 3 min read 
• People fascinated with true crime podcasts, scary movies or violent sports may be more interested in conspiracy theories. | 4 min read
More News
EXPERT PERSPECTIVES
• Doctors could counteract well-documented racial biases in health care by asking themselves and their colleagues why a patient's race is noted in medical charts and by allowing patients to self-identify their racial background, according to an essay by a medical resident at Columbia University Irving Medical Center. These are two of seven tools that Ashley Andreou describes for mitigating racism in health care and increasing providers' awareness of how their choice of words and patient labels affects people seeking care. | 4 min read
More Opinion
ICYMI (Our most-read stories of the week)
• Mouse Mummies Show Life Persists in Mars-like Environment  | 7 min read
• Earth's Latest 'Vital Signs' Show the Planet Is in Crisis | 4 min read
• To Understand Sex, We Need to Ask the Right Questions | 5 min read
It has been heartbreaking this week to see the news of yet another mass shooting in the U.S., this time in Lewiston, Maine. If you follow the science, you don't get sucked into the so-called debates about gun control. The science is clear: simple laws to control the use of weapons can prevent killings like the one that occurred this week in Maine, the editors of Scientific American wrote in a 2022 editorial. More guns do not stop crime. Guns are a public health crisis, just like COVID. Our editors concluded, "We need to become the kind of country that looks at guns for what they are: weapons that kill. And treat them with the kind of respect that insists they be harder to get and safer to use."
Please send any comments, questions or heartwarming stories our way (we could especially use the latter this week): newsletters@sciam.com
—Robin Lloyd, Contributing Editor
Subscribe to this and all of our newsletters here.

Scientific American
One New York Plaza, New York, NY, 10004
Support our mission, subscribe to Scientific American here
