Scientists have the power to genetically engineer many types of animals. Most Americans think it’s OK to alter or insert genes in animals and insects — provided it’s done in the interest of human health, according to a poll released August 16 by the Pew Research Center. The findings are similar to those from an earlier Pew survey, which found that a majority of Americans are fine with tweaking a baby’s genes, but only if it is to prevent disease.
In the new survey, a majority of respondents support engineering animals for the benefit of human health. For instance, 70 percent approve of preventing the spread of disease by reducing mosquitoes’ fertility (SN Online: 8/5/16), and 57 percent are on board with engineering animals to be organ donors for humans (SN: 11/2/17, p. 15). But people are not as comfortable with genetically manipulating animals for cosmetic or convenience reasons. A majority of respondents — 55 percent — object to genetically tweaking animals to produce more nutritious meat, saying that crosses a line. The results, based on a survey of 2,537 U.S. adults from April 23 to May 6, reveal the mixed feelings people have about this emerging biotechnology.
Harvesting organs
Of the 41 percent opposed to genetically engineering animals to grow organs or tissue for human transplant, 21 percent said they worried about harm to the animals. Only 16 percent said they were worried about potential human health risks.
Extinct means extinct
Bringing species back from extinction didn’t sit well with 67 percent of respondents, who balked at the idea of altering a living species to revive one no longer in existence (SN: 10/28/17, p. 28). Of that group, 18 percent said species are extinct for a reason; 23 percent said it messes with nature or God’s plan; and 14 percent said it’s a waste of resources. Only 4 percent said they were afraid it would create a “Jurassic Park scenario,” in which the de-extinct animals would run amok and kill people.
No to the glow
Engineering aquarium fish to make them glow got the thumbs down from 77 percent of respondents, who gave a variety of reasons: 48 percent of that group said it was a waste of resources, including 23 percent who said it offered no benefit to people or the fish and 13 percent who called it “frivolous.” Only 2 percent worried about how the fish could affect ecosystems if they were released into the wild.
Messing with mosquitoes
Though a majority supported engineering mosquitoes to improve public health, 29 percent disapproved. Of those, 23 percent worried about potential effects on other species, 23 percent were concerned about upsetting nature’s balance and 18 percent mentioned unintended consequences. Only 2 percent expressed worry about making mosquitoes extinct.
The Puerto Rican government has officially updated its tally of lives lost to Hurricane Maria to an estimated 2,975. That number, reported August 28 in a government-commissioned study by George Washington University in Washington, D.C., dwarfs the island’s previous count of 64, which officials later acknowledged was far too low.
The study covers September 2017 through February 2018 — two months longer than other recent estimates for the post-hurricane death toll (SN Online: 8/2/18). An absence of clear guidelines for how to certify deaths during a disaster, the researchers found, meant many death certificates didn’t reflect the role of the Category 4 storm, which hit the island on September 20, 2017. Based on mortality data including death certificates, the new 2,975 estimate falls between two other recent counts. One study in May estimated 4,645 deaths from the hurricane through December 2017 by surveying nearly 3,300 randomly selected households in January and February (SN Online: 5/29/18). Another study in August counted 1,139 excess deaths during the same period, by analyzing and comparing monthly death counts from January 2010 through December 2017.
In a report to Congress, a draft of which was published in July, Puerto Rican officials unofficially acknowledged that the death toll was likely far higher than 64, based on a count of roughly 1,427 more deaths in the four months after the storm than in the same period in the previous four years.
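That kind of excess-death estimate comes down to simple arithmetic: compare deaths observed in the months after the storm with a baseline built from the same months in earlier years. Here is a minimal sketch of the calculation, using made-up monthly totals rather than Puerto Rico’s actual mortality data:

```python
# Illustrative excess-death calculation (made-up numbers, not Puerto Rico's data):
# excess = deaths observed after the event minus the average for the same months
# in prior years.
observed = {"Sep": 2900, "Oct": 2700, "Nov": 2500, "Dec": 2600}   # post-storm months
baseline_years = [
    {"Sep": 2400, "Oct": 2350, "Nov": 2300, "Dec": 2450},         # prior year 1
    {"Sep": 2380, "Oct": 2400, "Nov": 2320, "Dec": 2430},         # prior year 2
]

excess = 0.0
for month, deaths in observed.items():
    expected = sum(year[month] for year in baseline_years) / len(baseline_years)
    excess += deaths - expected

print(f"estimated excess deaths: {excess:.0f}")
```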
While different methodologies have resulted in different death estimates, the new report “highlights that the humanitarian crisis in Puerto Rico continued until February 2018,” says Alexis Santos, a demographer at Penn State University who was not involved in the new report but was a coauthor of the August study. “All we can do is try to help those still suffering in Puerto Rico.”
A new artificial intelligence is turning its big brain to mapping earthquake aftershocks.
Scientists trained an artificial neural network to study the spatial relationships between more than 130,000 main earthquakes and their aftershocks. In tests, the AI was much better at predicting the locations of aftershocks than traditional methods that many seismologists use, the team reports in the Aug. 30 Nature.
Although it’s not possible to predict where and when an earthquake will happen, seismologists do know a few things about aftershocks. “We’ve known for a long time that they will cluster spatially and decay over time,” says geophysicist Susan Hough of the U.S. Geological Survey in Pasadena, Calif., who was not an author on the new study. Then, in 1992, a series of temblors prompted a flurry of interest in trying to map out where exactly an aftershock might occur, based on how a mainshock might shift stresses on other faults. First, a magnitude 7.3 earthquake shook the Southern California town of Landers and other nearby desert communities. Three hours later, a magnitude 6.5 aftershock struck the more populous area of Big Bear, about 35 kilometers away. The next day, a magnitude 5.7 aftershock struck near Yucca Mountain, Nev., nearly 300 kilometers away.
“After 1992, people were looking to understand [aftershock] patterns in more detail,” Hough says. Researchers began trying to distill the complicated stress change patterns using different criteria. The most widely used criterion, the “Coulomb failure stress change,” depends on fault orientations.
But fault orientations in the subsurface can be as complicated as a three-dimensional crazy quilt, and stresses can push on the faults from many different directions at once. Imagine a book sitting on a table: Shear stress pushes the book sideways, and might cause it to slide to the left or right. Normal stress pushes downward on the book, perpendicular to the table, so that it wouldn’t budge. Such a thorny computational problem may be tailor-made for a neural network, Hough says. Earthquake scientist Phoebe DeVries of Harvard University and colleagues, including a Cambridge, Mass.–based team from Google AI, fed data on more than 130,000 mainshock-aftershock pairs into an AI. Those data included not only locations and magnitudes, but also different measures of changes in stress on the faults from the quakes. The AI learned from the data to determine how likely an aftershock was to occur in a given place, and then the team tested how well the system could actually pinpoint aftershock locations using data from another 30,000 mainshock-aftershock pairs.
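In rough outline, this kind of approach amounts to training a small feed-forward network that maps stress-change features at a location to the probability that an aftershock occurs there. The sketch below is only an illustration under assumed inputs: the network size, features and data are placeholders, not the study’s actual architecture or dataset.

```python
# Illustrative sketch (not the study's code): train a small neural network to
# predict whether a grid cell near a mainshock hosts an aftershock, using
# stress-change features. All data here are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical features per grid cell: components of the stress-change tensor
# plus derived measures such as maximum shear stress change.
n_cells = 10_000
X = rng.normal(size=(n_cells, 6))
# Placeholder labels: did an aftershock occur in this cell?
y = (X[:, 0] + 0.5 * np.abs(X[:, 1]) + rng.normal(scale=0.5, size=n_cells)) > 0.8

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small fully connected network; the published model was deeper.
model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Score how well predicted probabilities rank the cells that actually had
# aftershocks, analogous to comparing against a Coulomb-stress baseline.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```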
The artificial intelligence system consistently predicted aftershock locations much better than the Coulomb failure criterion, the researchers found. That’s because the AI’s results were strongly correlated with other measures of stress change, such as the maximum amount of change in shear stress on a fault, the scientists say.
“It’s a cool study and might pave the way for future work to improve forecasting,” Hough says. But the study focuses just on static stresses, which are permanent shifts in stress due to a quake. Aftershocks may also be triggered by a more ephemeral source of stress known as dynamic stress, produced by a quake’s rumbling through the ground, she says.
Another question is whether a forecast system that used such an AI could leap into action quickly enough after a quake for its aftershock predictions to be helpful. The predictions in the new study benefited from a lot of information about which faults slipped and by how much. In the immediate aftermath of a big quake, such data wouldn’t be available for at least a day.
Using a neural network to study the aftershock problem “is a really nice, efficient approach,” says seismologist Lucy Jones of Caltech and the founder of the Dr. Lucy Jones Center for Science and Society, based in Los Angeles (SN: 3/31/18, p. 26).
But she agrees with Hough that, to help with risk management, the system would need to be able to respond more rapidly. The rule of thumb is that “whatever number of aftershocks you have on the first day, you get half of that on the second day, and so on,” says Jones, who was not involved in the new study. “A week after the earthquake, the majority of aftershocks have already happened.”
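Jones’ rule of thumb reflects the well-known Omori-style decay of aftershock rates, which fall off roughly as 1/t in the days after a mainshock. A quick back-of-the-envelope illustration (the day-one count and the exact decay law are assumptions for the example):

```python
# Illustrative only: expected aftershocks per day under a simple Omori-type
# decay, with the rate falling off roughly as 1/t and 100 aftershocks on day 1.
day1_count = 100  # assumed for illustration
for day in range(1, 8):
    expected = day1_count / day
    print(f"day {day}: ~{expected:.0f} aftershocks")
# Day 2 is about half of day 1, day 3 about a third, and so on, consistent
# with the rule of thumb quoted above.
```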
Power systems transcended kinship in medieval Europe. A burial site in southern Germany contains members of a powerful warrior family who journeyed widely to find recruits to join the household and support a post-Roman kingdom, a new study suggests.
Thirteen individuals interred at Niederstotzingen belonged to the Alemanni, a confederation of Germanic tribes that were conquered by and integrated into a neighboring kingdom of the Frankish people starting around 1,400 years ago, researchers say. Excavations in 1962 revealed the bodies, which the team estimates were buried from roughly 580 to 630, along with various weapons, armor, jewelry, bridle gear and the remains of three horses. DNA extracted from the German skeletons identified 11 as probably males, biomolecular archaeologist Niall O’Sullivan of Eurac Research’s Institute for Mummy Studies in Bolzano, Italy, and colleagues report online September 5 in Science Advances. Six skeletons displayed genetic ties to modern northern and eastern Europeans. All but one of those six were closely related, including a father and two of his sons. Chemical analyses of tooth enamel, which provides regional signals of early childhood diet, indicated that these individuals grew up near Niederstotzingen.
Artifacts from three foreign medieval European cultures lay in the graves of four local males. Weapons and other objects typical of the Franks accompanied one man — the previously mentioned father — who may have headed the powerful household, the researchers suspect. Another three individuals buried at the site were genetically unrelated to anyone else. Two possessed DNA like that of present-day Mediterranean people. All had spent their childhoods in other regions, tooth data suggest.
The new results support previous suggestions that, shortly after the Roman Empire’s fall in the fifth century (SN: 4/29/17, p. 18), the Frankish Empire maintained power throughout central Europe for several centuries by establishing mobile warrior households that enforced obedience to the ruler.
A new hexagon has emerged high in the skies over Saturn’s north pole.
As spring turned to summer in the planet’s northern hemisphere, a six-sided vortex appeared in the stratosphere. Surprisingly, the polar polygon seems to mirror the famous hexagonal cyclone that swirls in the clouds hundreds of kilometers below, researchers report online September 3 in Nature Communications.
When NASA’s Cassini spacecraft arrived at Saturn in 2004 — during summer in the southern hemisphere — the probe spied a similar vortex in the stratosphere over the south pole, though that one was shaped more like a plain old circle. As summer gradually turned to autumn, that vortex vanished. Now, planetary scientist Leigh Fletcher at the University of Leicester in England and colleagues report that Cassini caught a new vortex growing in the north during the spacecraft’s final years. Relying on infrared maps of the atmosphere, the team found that from 2014 to 2017 a warm, swirling mass of air started developing over the north pole. That wasn’t surprising — but the six-sided shape came as a bit of a shock.
The shape suggests that the underlying hexagon somehow controls what happens in the stratosphere. These sorts of insights could help researchers understand how energy moves around in other planetary atmospheres.
Unfortunately, Cassini is no longer around — it dove into Saturn last year (SN: 9/2/17, p. 16). But Earth-based telescopes will keep an eye on the storm to see how it changes along with Saturn’s seasons.
Life on a volcanic isle appears to give barn owls a blush of red-brown plumage.
The high-sulfur environment on such islands influences the birds’ coloration, researchers report March 13 in the Journal of Biogeography. Darker feathers might also play a role in detoxifying harmful sulfur-based chemicals or help the owls blend in better with the islands’ humid, shadowy forest backdrop. The findings are among the first evidence that environmental sources of sulfur — such as the soil — can influence the color of integument like fur or feathers. Barn owls (Tyto alba) are found across most continents and on many islands. The owls’ plumage varies considerably across the globe, with bellies ranging from almost completely white to a much darker copper color.
In 2021, evolutionary ecologist Andrea Romano and his colleagues discovered that barn owls on some islands are paler than mainland populations. “However, such a difference disappears on small and remote islands and archipelagos, where in some cases, owls are darker than the continental ones,” says Romano, of the University of Milan.
The researchers wondered if there was something special about these smaller, more isolated islands that was causing a color pattern reversal in the owls: sulfur. Many of the remote islands are volcanic in origin, with volcanoes loading the air and soil with sulfur dioxide. Sulfur also has a crucial role in the development of some melanin pigments. For instance, pheomelanin — which is biochemically built using sulfur compounds — imparts a reddish hue in vertebrate soft tissues, while eumelanin, which creates blacks and dark browns, doesn’t rely on sulfur.
Some studies have linked sulfur-rich diets or artificial sulfur sources like pollution to plumage and fur color, Romano says. So the team hypothesized that a volcanic environment full of sulfur might encourage the owls to produce more pheomelanin, making their plumage darker.
The researchers examined the preserved, feather-covered skins of more than 2,000 barn owl museum specimens from dozens of island groups. They scored the relative redness of the owls’ belly plumage, finding an average color for each geographic location. On islands with sulfur-rich volcanic soils or recently active volcanoes — such as Sulawesi in Indonesia or the Canary Islands — the owls had darker, redder plumage than those on nonvolcanic islands such as Tasmania, the team found.
The influence of volcanic sulfur on barn owl colors explains less than 10 percent of the color variation, the researchers estimate. Other inputs like genetics play a major role. For instance, one gene called MC1R is responsible for as much as 70 percent of the color variation, says Thomas Kvalnes, an ecoevolutionary biologist at the Norwegian Institute for Nature Research in Trondheim who was not involved with this study.
“Still there is variation left to explain, both within and between populations,” Kvalnes says. “This is where different environmental factors need to be taken into account.” It’s possible that sulfur-driven colors provide benefits to the owls, Romano says. Volcanic islands are often thick with vegetation supported by dark, fertile soil. Darker feathers might help the predatory owls disappear into their forest surroundings. The owls might also avoid the toxic effect of high sulfur exposure by shuttling a glut of sulfur into making more pheomelanin. Melanin has been previously tied to detoxifying pollutants in sea snakes, for instance (SN: 8/14/17).
Among birds, the connection between plumage color and volcanic sulfur might not be limited to just barn owls. Multiple bird species in Iceland, for example, are getting a pheomelanin boost from environmental sulfur, another group reported February 25 in the Journal of Ornithology. But some of these are migratory birds, Kvalnes points out, which dilutes the link between the local setting and the level of pigmentation.
It’s also possible the volcanic sulfur–pheomelanin relationship is even more widespread in vertebrates. “Studies on different species are highly needed to confirm whether this pattern is general,” Romano says. “Theoretically, however, the same process should apply to at least other birds and mammals.”
Romano is also interested in investigating how the sulfur is moving from the environment into the plumage pigmentation. Is it through the diet? The water? Maybe the air? “We know nothing about how sulfur reaches the soft tissues of this top predator,” he says.
A neutron star pileup may have emitted two different kinds of cosmic signals: ripples in spacetime known as gravitational waves and a brief blip of energy called a fast radio burst.
Only one of the three detectors in the LIGO-Virgo gravitational wave network picked up a signal from a cosmic collision on April 25, 2019. About 2.5 hours later, a fast radio burst detector picked up a signal from the same region of sky, researchers report March 27 in Nature Astronomy. If strengthened by further observations, the finding could bolster the theory that mysterious fast radio bursts have multiple origins — and neutron star mergers are one of them.
“We’re 99.5 percent sure” the two signals came from the same event, says astrophysicist Alexandra Moroianu, who spotted the merger and its aftermath while at the University of Western Australia in Perth. “We want to be 99.999 percent sure.”
Unfortunately, the network’s two other detectors didn’t catch the signal, so it’s impossible to precisely triangulate its location. “Even though it’s not a concrete, bang-on observation for something that’s been theorized for a decade, it’s the first evidence we’ve got,” Moroianu says. “If this is true … it’s going to be a big boom in fast radio burst science.”
Mysterious radio bursts Astronomers have spotted more than 600 fast radio bursts, or FRBs, since 2007. Despite their frequency, the causes remain a mystery. One leading candidate is a highly magnetized neutron star called a magnetar, which could be left behind after a massive star explodes (SN: 6/4/20). But some FRBs appear to repeat, while others are apparent one-off events, suggesting that there’s more than one way to produce them (SN: 2/7/20).
Theorists have wondered if a collision between two neutron stars could spark a one-off FRB before the wreckage from the collision produces a black hole. Such a smashup should emit gravitational waves, too (SN: 10/16/17).
Moroianu and colleagues searched archived data from LIGO and the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, a fast radio burst detector in British Columbia, to see if any of their signals lined up. The team found one candidate pairing: GW190425 and FRB20190425A. Even though the gravitational wave was picked up only by the LIGO detector in Livingston, La., the team spotted other suggestive signs that the signals were related. The FRB and the gravitational waves came from the same distance, about 370 million light-years from Earth. The gravitational waves were from the only neutron star merger LIGO spotted in that observing run, and the FRB was particularly bright. There may even have been a burst of gamma rays at the same time, according to satellite data — another aftereffect of a neutron star merger.
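Coincidence searches like this one come down to scanning two event catalogs for pairs that line up in time and, as far as the data allow, in sky position or distance. A minimal sketch of such a cross-match, using made-up catalog entries and arbitrary matching thresholds rather than the actual LIGO or CHIME data:

```python
# Illustrative cross-match of two event catalogs (placeholder entries, not the
# real LIGO or CHIME catalogs): flag pairs that are close in time and distance.
from datetime import datetime, timedelta

gw_events = [   # (name, UTC time, estimated distance in millions of light-years)
    ("GW_A", datetime(2019, 4, 25, 8, 0), 370),
]
frb_events = [
    ("FRB_X", datetime(2019, 4, 25, 10, 30), 360),
    ("FRB_Y", datetime(2019, 4, 26, 2, 0), 1200),
]

MAX_DELAY = timedelta(hours=6)   # assumed coincidence window
MAX_DISTANCE_DIFF = 100          # assumed tolerance, millions of light-years

for gw_name, gw_time, gw_dist in gw_events:
    for frb_name, frb_time, frb_dist in frb_events:
        close_in_time = timedelta(0) <= frb_time - gw_time <= MAX_DELAY
        close_in_distance = abs(frb_dist - gw_dist) <= MAX_DISTANCE_DIFF
        if close_in_time and close_in_distance:
            print(f"candidate pair: {gw_name} + {frb_name}")
```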
“Everything points at this being a very interesting combination of signals,” Moroianu says. She says it’s like watching a crime drama on TV: “You have so much evidence that anyone watching the TV show would be like, ‘Oh, I think he did it.’ But it’s not enough to convince the court.”
Neutron star secrets Despite the uncertainty, the finding has exciting implications, says astrophysicist Alessandra Corsi of Texas Tech University in Lubbock. One is the possibility that two neutron stars could merge into a single, extra-massive neutron star without immediately collapsing into a black hole. “There’s this fuzzy dividing line between what’s a neutron star and what’s a black hole,” says Corsi, who was not involved in the new work.
In 2013, astrophysicist Bing Zhang of the University of Nevada, Las Vegas suggested that a neutron star smashup could create an extra-massive neutron star that wobbles on the edge of stability for a few hours before collapsing into a black hole. In that case, the resulting FRB would be delayed — just like in the 2019 case.
The most massive neutron star yet observed is about 2.35 times the mass of the sun, but theorists think they could grow to be around three times the mass of the sun without collapsing (SN: 7/22/22). The neutron star that could have resulted from the collision in 2019 would have been 3.4 solar masses, Moroianu and colleagues calculate.
“Something like this, especially if it’s confirmed with more observations, it would definitely tell us something about how neutron matter behaves,” Corsi says. “The nice thing about this is we have hopes of testing this in the future.”
The next LIGO run is expected to start in May. Corsi is optimistic that more coincidences between gravitational waves and FRBs will show up, now that researchers know to look for them. “There should be a bright future ahead of us,” she says.
A rocky planet that circles a small star nearly 40 light-years from Earth is hot and has little or no atmosphere, a new study suggests. The finding raises questions about the possibility of atmospheres on the other orbs in the planetary system.
At the center of the system is the red dwarf star dubbed TRAPPIST-1; it hosts seven known planets with masses ranging from 0.3 to 1.4 times Earth’s, a few of which could hold liquid water (SN: 2/22/17; 3/19/18). The largest, TRAPPIST-1b, is the closest to its parent star and receives about four times the radiation Earth receives from the sun, says Thomas Greene, an astrobiologist at NASA’s Ames Research Center at Moffett Field, Calif. Like all other planets in the system, TRAPPIST-1b is tidally locked, meaning that one side of the planet always faces the star, and one side looks away. Calculations suggest that if the stellar energy falling on TRAPPIST-1b were distributed around the planet — by an atmosphere, for example — and then reradiated equally in all directions, the planet’s surface temperature would be around 120° Celsius.
But the dayside temperature of the planet is actually around 230° C, Greene and colleagues report online March 27 in Nature. That, in turn, suggests that there’s little or no atmosphere to carry heat from the perpetually sunlit side of the planet to the dark side, the team argues.
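The two temperatures follow from a textbook equilibrium-temperature estimate; the difference is whether the absorbed starlight is spread over the whole planet or reradiated only from the dayside. A rough sketch of that calculation, using approximate published values for TRAPPIST-1’s size and temperature and the planet’s orbital distance, and assuming the planet reflects no light:

```python
# Rough equilibrium-temperature estimate for TRAPPIST-1b (zero albedo assumed;
# stellar radius, temperature and orbital distance are approximate values).
import math

R_SUN = 6.957e8          # meters
AU = 1.496e11            # meters

T_star = 2566.0          # K, approximate effective temperature of TRAPPIST-1
R_star = 0.119 * R_SUN   # approximate stellar radius
a = 0.0115 * AU          # approximate orbital distance of TRAPPIST-1b

# Heat spread evenly around the planet (by an atmosphere, for example):
T_full = T_star * math.sqrt(R_star / (2 * a))

# No redistribution: the dayside alone reradiates the absorbed starlight,
# giving a hotter average dayside temperature.
T_dayside = T_full * (8.0 / 3.0) ** 0.25

print(f"full redistribution: ~{T_full - 273.15:.0f} C")    # roughly 120-130 C
print(f"dayside only:        ~{T_dayside - 273.15:.0f} C") # roughly 230-240 C
```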
To take TRAPPIST-1b’s temperature, Greene and his colleagues used the James Webb Space Telescope to observe the planet in a narrow band of infrared wavelengths five times in 2022. Because the observations were made just before and after the planet passed behind its parent star, astronomers could see the fully lit face of the planet, Greene says.
The team’s results are “the first ‘deep dive’ look at this planet,” says Knicole Colon, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Md., who was not involved with the study. “With every observation, we expect to learn something new,” she adds.
Astronomers have long suggested that planets around red dwarf stars might not be able to hold onto their atmospheres, largely because such stars’ frequent and high-energy flares would blast away any gaseous shroud they might have during their early years (SN: 12/20/22). Yet there are some scenarios in which such flares could heat up a planet’s surface and drive volcanism that, in turn, yields gases that could help form a new atmosphere.
“To be totally sure that this planet has no atmosphere, we need many more measurements,” says Michaël Gillon, an astrophysicist at the University of Liège in Belgium who was not part of the new study. It’s possible that when observed at a wider variety of wavelengths and from other angles, the planet could show signs of a gaseous shroud and thus possibly hints of volcanism.
Either way, says Laura Kreidberg, an astronomer at the Max Planck Institute for Astronomy in Heidelberg, Germany, who also did not participate in the study, the new result “definitely motivates detailed study of the cooler planets in the system, to see if the same is true of them.”
Protecting the anonymity of publicly available genetic data, including DNA donated to research projects, may be impossible.
About 60 percent of people of European descent who search genetic genealogy databases will find a match with a relative who is a third cousin or closer, a new study finds. The result suggests that with a database of about 3 million people, police or anyone else with access to DNA data can figure out the identity of virtually any American of European descent, Yaniv Erlich and colleagues report online October 11 in Science. Erlich, the chief science officer of the consumer genetic testing company MyHeritage, and colleagues examined his company’s database and that of the public genealogy site GEDMatch, each containing data from about 1.2 million people. Using DNA matches to relatives, along with family tree information and some basic demographic data, scientists estimate that they could narrow the identity of an anonymous DNA owner to just one or two people.
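The 60 percent figure is, at heart, a coverage calculation: given how many third-cousin-or-closer relatives a typical person has, how likely is it that at least one of them already sits in a database of a given size? A toy version of that estimate is sketched below; the relative count and population size are assumptions for illustration, and the study’s actual model is more careful about which relatives share enough detectable DNA.

```python
# Toy coverage estimate (not the study's model): chance that at least one of a
# person's close-ish relatives appears in a genealogy database.
def match_probability(database_size, population_size, detectable_relatives):
    p_single = database_size / population_size           # chance any one relative is enrolled
    return 1 - (1 - p_single) ** detectable_relatives    # chance at least one is

POPULATION = 250_000_000   # assumed size of the relevant population
RELATIVES = 200            # assumed relatives close enough to give a detectable match

for db_size in (1_200_000, 3_000_000):
    p = match_probability(db_size, POPULATION, RELATIVES)
    print(f"database of {db_size:,} people: ~{p:.0%} chance of a match")
```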
Recent cases identifying suspects in violent crimes through DNA searches of GEDMatch, such as the Golden State Killer case (SN Online: 4/29/18), have raised privacy concerns (SN Online: 6/7/18). And the same process used to find rape and murder suspects can also identify people who have donated anonymous DNA for genetic and medical research studies, the scientists say.
Genetic data used in research is stripped of information like names, ages and addresses, and can’t be used to identify individuals, government officials have said. But “that’s clearly untrue,” as Erlich and colleagues have demonstrated, says Rori Rohlfs, a statistical geneticist at San Francisco State University, who was not involved in the study.
Using genetic genealogy techniques that mirror searches for the Golden State Killer and suspects in at least 15 other criminal cases, Erlich’s team identified a woman who participated anonymously in the 1000 Genomes project. That project cataloged genetic variants in about 2,500 people from around the world. Erlich’s team pulled the woman’s anonymous data from the publicly available 1000 Genomes database. The researchers then created a DNA profile similar to the ones generated by consumer genetic testing companies such as 23andMe and AncestryDNA (SN: 6/23/18, p.14) and uploaded that profile to GEDMatch.
A search turned up matches with two distant cousins, one from North Dakota and one from Wyoming. The cousins also shared DNA indicating that they had a common set of ancestors four to six generations ago. Building on some family tree information already collected by those cousins, researchers identified the ancestral couple and filled in hundreds of their descendants, looking for a woman who matched the age and other publicly available demographic data of the 1000 Genomes participant.
It took a day to find the right person.
That example suggests that scientists need to reconsider whether they can guarantee research participants anonymity if genetic data are publicly shared, Rohlfs says.
In reality, though, identifying a person from a DNA match with a distant relative is much harder than it appears, and requires a lot of expertise and gumshoe work, Ellen Greytak says. She is the director of bioinformatics at Parabon NanoLabs, a company in Reston, Va., that has helped close at least a dozen criminal cases since May using genetic genealogy searches. “The gulf between a match and identification is absolutely massive,” she says.
The company has also found that people of European descent often have DNA matches to relatives in GEDMatch. But tracking down a single suspect from those matches is often confounded by intermarriages, adoptions, aliases, cases of misidentified or unknown parentage and other factors, says CeCe Moore, a genealogist who spearheads Parabon’s genetic genealogy service.
“The study demonstrates the power of genetic genealogy in a theoretical way,” Moore says, “but doesn’t fully capture the challenges of the work in practice.” For instance, Erlich and colleagues already had some family tree information from the 1000 Genomes woman’s relatives, “so they had a significant head start.”
Erlich’s example might be an oversimplification, Rohlfs says. The researchers made rough estimates and assumptions that are not perfect, but the conclusion is solid, she says. “Their work is approximate, but totally reasonable.” And the conclusion that almost anyone can be identified from DNA should spark public discussion about how DNA data should be used for law enforcement and research, she says.
Most U.S. government attempts to quantify the costs and benefits of protecting the country’s bodies of water are likely undervaluing healthy lakes and rivers, researchers argue in a new study. That’s because some clean water benefits get left out of the analyses, sometimes because these benefits are difficult to pin numbers on. As a result, the apparent value of many environmental regulations is probably discounted.
The study, published online October 8 in the Proceedings of the National Academy of Sciences, surveyed 20 government reports analyzing the economic impacts of U.S. water pollution laws. Most of these laws have been enacted since 2000, when cost-benefit analyses became a requirement. Analysis of a measure for restricting river pollution, for example, might find that it increases costs for factories using that river for wastewater disposal, but boosts tourism revenues by drawing more kayakers and swimmers. Only two studies out of 20 showed the economic benefits of these laws exceeding the costs. That’s uncommon among analyses of environmental regulations, says study coauthor David Keiser, an environmental economist at Iowa State University in Ames. Usually, the benefits exceed the costs.
So why does water pollution regulation seem, on paper at least, like such a losing proposition?
Keiser has an explanation: Summing up the monetary benefits of environmental policies is really hard. Many of these benefits are intangible and don’t have clear market values. So deciding which benefits to count, and how to count them, can make a big difference in the results. Many analyses assume water will be filtered for drinking, Keiser says, so they don’t count the human health benefits of clean lakes and rivers (SN: 8/18/18, p. 14). That’s different from air pollution cost-benefit studies, which generally do include the health benefits of cleaner air by factoring in data tracking things like doctor’s visits or drug prescriptions. That could explain why Clean Air Act rules tend to get more favorable reviews, Keiser says — human health accounts for about 95 percent of the measured benefits of air quality regulations.
“You can avoid a lake with heavy, thick, toxic algal blooms,” Keiser says. “If you walk outside and have very polluted air, it’s harder to avoid.”
But even if people can avoid an algae-choked lake, they still pay a price for that pollution, says environmental scientist Thomas Bridgeman, director of the Lake Erie Center at the University of Toledo in Ohio. Communities that pull drinking water from a lake filled with toxic blooms of algae or cyanobacteria spend more to make the water safe to drink. Bridgeman’s seen it firsthand: In 2014, Lake Erie’s cyanobacteria blooms from phosphorus runoff shut down Toledo’s water supply for two days and forced the city to spend $500 million on water treatment upgrades.
Most of the studies surveyed by Keiser and his team were missing other kinds of benefits, too. The reports usually left out the value of eliminating certain toxic and nonconventional pollutants — molecules such as bisphenol A, or BPA, and perfluorooctanoic acid, or PFOA (SN: 10/3/15, p. 12). In high quantities, these compounds, which are used to make some plastics and nonstick coatings, can cause harm to humans and wildlife. Many studies also didn’t include discussion of how the quality of surface waters can affect groundwater, which is a major source of drinking water for many people.
A lack of data on water quality may also limit studies, Keiser’s team suggests. While there’s a national database tracking daily local air pollution levels, the data from various water quality monitoring programs aren’t centralized. That makes gathering and evaluating trends in water quality harder.
Plus, there are the intangibles — the value of aquatic species that are essential to the food chain, for example. “Some things are just inherently difficult to put a dollar [value] on,” says Robin Craig, an environmental law professor at the University of Utah in Salt Lake City. “What is it worth to have a healthy native ecosystem?… That’s where it can get very subjective very fast.”
That subjectivity can allow agencies to analyze policies in ways that suit their own political agendas, says Matthew Kotchen, an environmental economist at Yale University. An example: the wildly different assessments by the Obama and Trump administrations of the value gained from the 2015 Clean Water Rule, also known as the Waters of the United States rule.
The rule, passed under President Barack Obama, clarified the definition of waters protected under the 1972 Clean Water Act to include tributaries and wetlands connected to larger bodies of water. The Environmental Protection Agency estimated in 2015 that the rule would result in yearly economic benefits ranging from $300 million to $600 million, edging out the predicted annual costs of $200 million to $500 million. But in 2017, Trump’s EPA reanalyzed the rule and proposed rolling it back, saying that the agency had now calculated just $30 million to $70 million in annual benefits.
The difference in the conclusions came down to the consideration of wetlands: The 2015 analysis found that protecting wetlands, such as marshes and bogs that purify water, tallied up to $500 million in annual benefits. The Trump administration’s EPA, however, left wetlands out of the calculation entirely, says Kotchen, who analyzed the policy swing in Science in 2017.
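The swing between the two assessments is visible in the arithmetic: drop the wetlands line item and the benefit range falls well below the cost range. A quick sketch using the figures quoted above (and assuming, for simplicity, that the cost estimates stayed the same across the two analyses):

```python
# Net-benefit ranges for the Clean Water Rule analyses, in millions of dollars
# per year, using the rough figures quoted above.
costs = (200, 500)
benefits_2015 = (300, 600)   # 2015 EPA analysis, with ~$500M/yr from wetlands
benefits_2017 = (30, 70)     # 2017 reanalysis, wetlands excluded

for label, (b_lo, b_hi) in (("2015 analysis", benefits_2015),
                            ("2017 reanalysis", benefits_2017)):
    net_worst = b_lo - costs[1]   # low benefits, high costs
    net_best = b_hi - costs[0]    # high benefits, low costs
    print(f"{label}: net benefit between ${net_worst}M and ${net_best}M per year")
```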
Currently, the rule has gone into effect in 26 states, but is still tied up in legal challenges.
It’s an example of how methodology — and what counts as a benefit — can have a huge impact on the apparent value of environmental policies and laws.
The squishiness in analyzing environmental benefits underlies many of the Trump administration’s proposed rollbacks of Obama-era environmental legislation, not just ones about water pollution, Kotchen says. There are guidelines for how such cost-benefit analyses should be carried out, he says, but there’s still room for researchers or government agencies to choose what to include or exclude.
In June, the EPA, then under the leadership of Scott Pruitt, proposed revising the way the agency does cost-benefit analyses to no longer include so-called indirect benefits. For example, in evaluating policies to reduce carbon dioxide emissions, the agency would ignore the fact that those measures also reduce other harmful air pollutants. The move would, overall, make environmental policies look less beneficial.
These sharp contrasts in how presidential administrations approach environmental impact studies are not unprecedented, says Craig, the environmental law professor. “Pretty much every time we change presidents, the priorities for how to weigh those different elements change.”