The Puerto Rican government has officially updated its tally of lives lost to Hurricane Maria to an estimated 2,975. That number, reported August 28 in a government-commissioned study by George Washington University in Washington D.C., dwarfs the island’s previous count of 64, which officials later acknowledged was far too low.
The study covers September 2017 through February 2018 — two months longer than other recent estimates for the post-hurricane death toll (SN Online: 8/2/18). An absence of clear guidelines for how to certify deaths during a disaster, the researchers found, meant many death certificates didn’t reflect the role of the Category 4 storm, which hit the island on September 20, 2017. Based on mortality data including death certificates, the new 2,975 estimate falls between two other recent counts. One study in May estimated 4,645 deaths from the hurricane through December 2017 by surveying nearly 3,300 randomly selected households in January and February (SN Online: 5/29/18). Another study in August counted 1,139 excess deaths during the same period, by analyzing and comparing monthly death counts from January 2010 through December 2017.
In a report to Congress, a draft of which was published in July, Puerto Rican officials unofficially acknowledged that the death toll was likely far higher than 64, based on a count of roughly 1,427 more deaths in the four months after the storm than the average for that same period over the previous four years.
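The arithmetic behind such excess-death estimates is simple in principle: tally deaths after the storm and subtract what would have been expected based on earlier years. Here is a minimal sketch of that calculation in Python, with made-up monthly figures rather than Puerto Rico’s actual vital statistics:

```python
# Toy excess-mortality calculation (illustrative numbers only,
# not Puerto Rico's actual vital statistics).
observed = {"Sep": 2_900, "Oct": 3_000, "Nov": 2_900, "Dec": 2_800}

# Baseline: average deaths in the same months over previous years.
expected = {"Sep": 2_400, "Oct": 2_550, "Nov": 2_500, "Dec": 2_600}

excess = {month: observed[month] - expected[month] for month in observed}
print(excess)                    # excess deaths per month
print(sum(excess.values()))      # total excess over the window
```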
While different methodologies have resulted in different death estimates, the new report “highlights that the humanitarian crisis in Puerto Rico continued until February 2018,” says Alexis Santos, a demographer at Penn State University who was not involved in the new report but was a coauthor of the August study. “All we can do is try to help those still suffering in Puerto Rico.”
A new artificial intelligence is turning its big brain to predicting where earthquake aftershocks will strike.
Scientists trained an artificial neural network to study the spatial relationships between more than 130,000 main earthquakes and their aftershocks. In tests, the AI was much better at predicting the locations of aftershocks than traditional methods that many seismologists use, the team reports in the Aug. 30 Nature.
Although it’s not possible to predict where and when an earthquake will happen, seismologists do know a few things about aftershocks. “We’ve known for a long time that they will cluster spatially and decay over time,” says geophysicist Susan Hough of the U.S. Geological Survey in Pasadena, Calif., who was not an author on the new study. Then, in 1992, a series of temblors prompted a flurry of interest in trying to map out where exactly an aftershock might occur, based on how a mainshock might shift stresses on other faults. First, a magnitude 7.3 earthquake shook the Southern California town of Landers and other nearby desert communities. Three hours later, a magnitude 6.5 aftershock struck the more populous area of Big Bear, about 35 kilometers away. The next day, a magnitude 5.7 aftershock struck near Yucca Mountain, Nev., nearly 300 kilometers away.
“After 1992, people were looking to understand [aftershock] patterns in more detail,” Hough says. Researchers began trying to distill the complicated stress change patterns using different criteria. The most commonly used criterion, the “Coulomb failure stress change,” depends on how faults are oriented.
But fault orientations in the subsurface can be as complicated as a three-dimensional crazy quilt, and stresses can push on the faults from many different directions at once. Imagine a book sitting on a table: Shear stress pushes the book sideways, and might cause it to slide to the left or right. Normal stress pushes downward on the book, perpendicular to the table, pinning it in place.

Such a thorny computational problem may be tailor-made for a neural network, Hough says. Earthquake scientist Phoebe DeVries of Harvard University and colleagues, including a Cambridge, Mass.–based team from Google AI, fed data on more than 130,000 mainshock-aftershock pairs into an AI. Those data included not only locations and magnitudes, but also different measures of changes in stress on the faults from the quakes. The AI learned from the data to determine how likely an aftershock was to occur in a given place. The team then tested how well the system could pinpoint aftershock locations using data from another 30,000 mainshock-aftershock pairs.
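The study’s own model was a deep neural network trained on gridded stress-change data, and its exact setup isn’t reproduced here. But the general recipe — stress-change features in, aftershock likelihood per grid cell out — can be sketched with an off-the-shelf classifier. In this rough Python illustration, the feature names and data are synthetic stand-ins:

```python
# Sketch of the general approach: learn whether a grid cell hosts an
# aftershock from stress-change features. Features and labels here are
# synthetic; the published model was a deep network trained on gridded
# stress changes from ~131,000 mainshock-aftershock pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 5_000
# Hypothetical columns: shear stress change, normal stress change,
# maximum shear stress change (units arbitrary for this sketch).
X = rng.normal(size=(n_cells, 3))
# Label: 1 if the cell contained an aftershock, 0 otherwise (synthetic).
y = (X[:, 2] + 0.5 * rng.normal(size=n_cells) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Probability of an aftershock in each held-out grid cell
p_aftershock = net.predict_proba(X_test)[:, 1]
print("test accuracy:", net.score(X_test, y_test))
```

The real evaluation, as described above, compared such predictions against the Coulomb criterion on a held-out set of roughly 30,000 mainshock-aftershock pairs.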
The artificial intelligence system consistently predicted aftershock locations much better than the Coulomb failure criterion, the researchers found. That’s because the AI’s results were strongly correlated with other measures of stress change, such as the maximum amount of change in shear stress on a fault, the scientists say.
“It’s a cool study and might pave the way for future work to improve forecasting,” Hough says. But the study focuses just on static stresses, which are permanent shifts in stress due to a quake. Aftershocks may also be triggered by a more ephemeral source of stress known as dynamic stress, produced by a quake’s rumbling through the ground, she says.
Another question is whether a forecast system that used such an AI could leap into action quickly enough after a quake for its aftershock predictions to be helpful. The predictions in the new study benefited from a lot of information about which faults slipped and by how much. In the immediate aftermath of a big quake, such data wouldn’t be available for at least a day.
Using a neural network to study the aftershock problem “is a really nice, efficient approach,” says seismologist Lucy Jones of Caltech and the founder of the Dr. Lucy Jones Center for Science and Society, based in Los Angeles (SN: 3/31/18, p. 26).
But she agrees with Hough that, to help with risk management, the system would need to be able to respond more rapidly. The rule of thumb is that “whatever number of aftershocks you have on the first day, you get half of that on the second day, and so on,” says Jones, who was not involved in the new study. “A week after the earthquake, the majority of aftershocks have already happened.”
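Jones’ rule of thumb reflects the well-known Omori decay pattern, in which the aftershock rate falls off roughly in proportion to one over the time since the mainshock. A quick back-of-the-envelope sketch, using a hypothetical day-one count, shows how front-loaded such a sequence is:

```python
# Rough Omori-style decay: if day 1 produces N aftershocks, day t
# produces roughly N / t ("half of that on the second day, and so on").
day_one = 200                       # hypothetical count on day 1
daily = [day_one / t for t in range(1, 8)]

print([round(d) for d in daily])    # [200, 100, 67, 50, 40, 33, 29]
share = sum(daily[:2]) / sum(daily)
print(f"share of the week's aftershocks in the first two days: {share:.0%}")
```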
Power structures transcended kinship in early medieval Europe. A burial site in southern Germany holds members of a powerful warrior family alongside fighters recruited from distant regions to join the household and support a post-Roman kingdom, a new study suggests.
Thirteen individuals interred at Niederstotzingen belonged to the Alemanni, a confederation of Germanic tribes that were conquered by and integrated into a neighboring kingdom of the Frankish people starting around 1,400 years ago, researchers say. Excavations in 1962 revealed the bodies, which the team estimates were buried from roughly 580 to 630, along with various weapons, armor, jewelry, bridle gear and the remains of three horses. DNA extracted from the skeletons identified 11 of the 13 as probably male, biomolecular archaeologist Niall O’Sullivan of Eurac Research’s Institute for Mummy Studies in Bolzano, Italy, and colleagues report online September 5 in Science Advances. Six skeletons displayed genetic ties to modern northern and eastern Europeans. All but one of those six were closely related, including a father and two of his sons. Chemical analyses of tooth enamel, which provides regional signals of early childhood diet, indicated that these individuals grew up near Niederstotzingen.
Artifacts from three foreign medieval European cultures lay in the graves of four local males. Weapons and other objects typical of the Franks accompanied one man — the previously mentioned father — who may have headed the powerful household, the researchers suspect. Another three individuals buried at the site were genetically unrelated to anyone else. Two possessed DNA like that of present-day Mediterranean people. All three had spent their childhoods in other regions, tooth data suggest.
The new results support previous suggestions that, shortly after the Roman Empire’s fall in the fifth century (SN: 4/29/17, p. 18), the Frankish Empire maintained power throughout central Europe for several centuries by establishing mobile, warrior households that enforced obedience to the ruler.
A new hexagon has emerged high in the skies over Saturn’s north pole.
As spring turned to summer in the planet’s northern hemisphere, a six-sided vortex appeared in the stratosphere. Surprisingly, the polar polygon seems to mirror the famous hexagonal cloud pattern that swirls hundreds of kilometers below, researchers report online September 3 in Nature Communications.
When NASA’s Cassini spacecraft arrived at Saturn in 2004 — during summer in the southern hemisphere — the probe spied a similar vortex in the stratosphere over the south pole, though that one was shaped more like a plain old circle. As summer gradually turned to autumn, that vortex vanished. Now, planetary scientist Leigh Fletcher at the University of Leicester in England and colleagues report that Cassini caught a new vortex growing in the north during the spacecraft’s final years. Relying on infrared maps of the atmosphere, the team found that from 2014 to 2017 a warm, swirling mass of air started developing over the north pole. That wasn’t surprising — but the six-sided shape came as a bit of a shock.
The shape suggests that the underlying hexagon somehow controls what happens in the stratosphere. These sorts of insights could help researchers understand how energy moves around in other planetary atmospheres.
Unfortunately, Cassini is no longer around — it dove into Saturn last year (SN: 9/2/17, p. 16). But Earth-based telescopes will keep an eye on the storm to see how it changes along with Saturn’s seasons.
Most U.S. government attempts to quantify the costs and benefits of protecting the country’s bodies of water are likely undervaluing healthy lakes and rivers, researchers argue in a new study. That’s because some clean water benefits get left out of the analyses, sometimes because these benefits are difficult to pin numbers on. As a result, the apparent value of many environmental regulations is probably discounted.
The study, published online October 8 in the Proceedings of the National Academy of Sciences, surveyed 20 government reports analyzing the economic impacts of U.S. water pollution laws. Most of these laws have been enacted since 2000, when cost-benefit analyses became a requirement. Analysis of a measure for restricting river pollution, for example, might find that it increases costs for factories using that river for wastewater disposal, but boosts tourism revenues by drawing more kayakers and swimmers. Only two studies out of 20 showed the economic benefits of these laws exceeding the costs. That’s uncommon among analyses of environmental regulations, says study coauthor David Keiser, an environmental economist at Iowa State University in Ames. Usually, the benefits exceed the costs.
So why does water pollution regulation seem, on paper at least, like such a losing proposition?
Keiser has an explanation: Summing up the monetary benefits of environmental policies is really hard. Many of these benefits are intangible and don’t have clear market values. So deciding which benefits to count, and how to count them, can make a big difference in the results. Many analyses assume water will be filtered for drinking, Keiser says, so they don’t count the human health benefits of clean lakes and rivers (SN: 8/18/18, p. 14). That’s different from air pollution cost-benefit studies, which generally do include the health benefits of cleaner air by factoring in data tracking things like doctor’s visits or drug prescriptions. That could explain why Clean Air Act rules tend to get more favorable reviews, Keiser says — human health accounts for about 95 percent of the measured benefits of air quality regulations.
“You can avoid a lake with heavy, thick, toxic algal blooms,” Keiser says. “If you walk outside and have very polluted air, it’s harder to avoid.”
But even if people can avoid an algae-choked lake, they still pay a price for that pollution, says environmental scientist Thomas Bridgeman, director of the Lake Erie Center at the University of Toledo in Ohio. Communities that pull drinking water from a lake filled with toxic blooms of algae or cyanobacteria spend more to make the water safe to drink. Bridgeman’s seen it firsthand: In 2014, Lake Erie’s cyanobacteria blooms from phosphorus runoff shut down Toledo’s water supply for two days and forced the city to spend $500 million on water treatment upgrades.
Most of the studies surveyed by Keiser and his team were missing other kinds of benefits, too. The reports usually left out the value of eliminating certain toxic and nonconventional pollutants — molecules such as bisphenol A, or BPA, and perfluorooctanoic acid, or PFOA (SN: 10/3/15, p. 12). In high quantities, these compounds, which are used to make some plastics and nonstick coatings, can cause harm to humans and wildlife. Many studies also didn’t include discussion of how the quality of surface waters can affect groundwater, which is a major source of drinking water for many people.
A lack of data on water quality may also limit studies, Keiser’s team suggests. While there’s a national database tracking daily local air pollution levels, the data from various water quality monitoring programs aren’t centralized. That makes gathering and evaluating trends in water quality harder.
Plus, there are the intangibles — the value of aquatic species that are essential to the food chain, for example. “Some things are just inherently difficult to put a dollar [value] on,” says Robin Craig, an environmental law professor at the University of Utah in Salt Lake City. “What is it worth to have a healthy native ecosystem?… That’s where it can get very subjective very fast.”
That subjectivity can allow agencies to analyze policies in ways that suit their own political agendas, says Matthew Kotchen, an environmental economist at Yale University. An example: the wildly different assessments by the Obama and Trump administrations of the value gained from the 2015 Clean Water Rule, also known as the Waters of the United States rule.
The rule, passed under President Barack Obama, clarified the definition of waters protected under the 1972 Clean Water Act to include tributaries and wetlands connected to larger bodies of water. The Environmental Protection Agency estimated in 2015 that the rule would result in yearly economic benefits ranging from $300 million to $600 million, edging out the predicted annual costs of $200 million to $500 million. But in 2017, Trump’s EPA reanalyzed the rule and proposed rolling it back, saying that the agency had now calculated just $30 million to $70 million in annual benefits.
The difference in the conclusions came down to the consideration of wetlands: The 2015 analysis found that protecting wetlands, such as marshes and bogs that purify water, tallied up to $500 million in annual benefits. The Trump administration’s EPA, however, left wetlands out of the calculation entirely, says Kotchen, who analyzed the policy swing in Science in 2017.
Currently, the rule has gone into effect in 26 states, but is still tied up in legal challenges.
It’s an example of how methodology — and what counts as a benefit — can have a huge impact on the apparent value of environmental policies and laws.
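With the Clean Water Rule numbers above, that sensitivity is easy to see: remove one large benefit category and the sign of the net estimate flips. Here is a toy comparison using midpoints of the reported ranges — an illustration, not either agency’s actual analysis:

```python
# All values in millions of dollars per year, midpoints of the ranges
# cited above; an illustration, not either agency's actual analysis.
costs = (200 + 500) / 2              # predicted annual costs
benefits_2015 = (300 + 600) / 2      # 2015 analysis, wetlands included
benefits_2017 = (30 + 70) / 2        # 2017 reanalysis, wetlands excluded

print("2015-style net benefit:", benefits_2015 - costs)   # about +100
print("2017-style net benefit:", benefits_2017 - costs)   # about -300
```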
The squishiness in analyzing environmental benefits underlies many of the Trump administration’s proposed rollbacks of Obama-era environmental regulations, not just ones about water pollution, Kotchen says. There are guidelines for how such cost-benefit analyses should be carried out, he says, but there’s still room for researchers or government agencies to choose what to include or exclude.
In June, the EPA, then under the leadership of Scott Pruitt, proposed revising the way the agency does cost-benefit analyses to no longer include so-called indirect benefits. For example, in evaluating policies to reduce carbon dioxide emissions, the agency would ignore the fact that those measures also reduce other harmful air pollutants. The move would, overall, make environmental policies look less beneficial.
These sharp contrasts in how presidential administrations approach environmental impact studies are not unprecedented, says Craig, the environmental law professor. “Pretty much every time we change presidents, the priorities for how to weigh those different elements change.”
Flying forward is hard enough, but flying nowhere, just hovering, is so much harder. Most bats and birds can manage the feat for only a few frantic seconds.
Hovering means losing a useful aerodynamic shortcut, says aerospace engineer and biologist David Lentink of Stanford University. As a bat or bird flies forward, its body movement sends air flowing around the wings and providing some cheap lift. For animals on the scale of bats and birds, that’s a big help. Without that boost, “you’re going to have to move all the air over your wings by moving it with your wings,” he says. The energy per second you’re consuming to stay in place by flapping your wings back and forth like a hummingbird “is gigantic.”

So how do vertebrates in search of nectar, for whom a lot of energy-sucking hovering is part of life, manage the job? For the first direct measurements of the wingbeat forces that make hovering possible, Lentink’s Ph.D. student Rivers Ingersoll spent three years creating a flight chamber with exquisitely responsive sensors in the floor and ceiling. As a bird or bat hovers inside, the sensors can measure — every 200th of a second — tremors even smaller than a nanometer caused by air from fluttering wings.

Once the delicate techno-marvel of an instrument was perfected, the researchers packed it into 11 shipping cases and sent it more than 6,000 kilometers to the wilds of Costa Rica. “Very difficult,” Ingersoll acknowledges. The Las Cruces Research Station is great for field biology, but it’s nothing like a Stanford engineering lab. Every car turning into the station’s driveway set off the wingbeat sensors. And even the special thick-walled room that became the machine’s second home warmed up enough every day to give the instrument a fever.

Babying the instrument as best he could, Ingersoll made direct measurements for 17 hovering species of hummingbirds and three bats, including Pallas’s long-tongued bats (Glossophaga soricina). “Their up-pointy noses made me think of rhino faces,” he says. Pallas’s bats specialize in nectar sipping much as hummingbirds do.

Comparing wingbeats, bat vs. bird, revealed differences, though. Hummers coupled powerful downstrokes and recovery upstrokes that twist part of the wings almost backward. The twist supplied about a quarter of the energy it takes to keep a bird aloft, the researchers report in the September 26 Science Advances. The two kinds of nectar bats got a little more lift from the upstroke than did a bat that eats fruit instead of strenuously hovering for nectar. Yet even the specialist nectar bats relied mostly on downstrokes: powerful, deeply angled downstrokes of really big wings.
Those bat wings span proportionally more area than hummer wings. So the bats get about the same hovering power per gram of body weight that hummingbirds do. Supersizing can have its own kind of high-tech design elegance.
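The study’s full analysis combines detailed wing kinematics with those force-plate traces, but the basic bookkeeping can be sketched simply: in hover, the vertical force averaged over a wingbeat must equal the animal’s weight, so summing the sampled force separately over the downstroke and the upstroke gives each stroke’s share of weight support. A toy version with a synthetic 200-samples-per-second trace (all numbers hypothetical):

```python
# Toy split of weight support between downstroke and upstroke from a
# force trace sampled at 200 Hz (synthetic numbers, not the study's data).
import numpy as np

fs = 200                                   # samples per second
t = np.arange(0, 0.05, 1 / fs)             # one 50-millisecond wingbeat
weight = 0.05                              # newtons, roughly a 5-gram animal

# Fake vertical-force trace: a strong downstroke peak, a weaker upstroke peak
force = weight * (1.5 * np.exp(-((t - 0.015) / 0.006) ** 2)
                  + 0.5 * np.exp(-((t - 0.040) / 0.006) ** 2))

downstroke = t < 0.025                     # assume the first half is downstroke
upstroke_share = force[~downstroke].sum() / force.sum()
print(f"upstroke share of weight support: {upstroke_share:.0%}")
```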
SAN DIEGO — A sleepless night can leave the brain spinning with anxiety the next day.
In healthy adults, overnight sleep deprivation triggered anxiety the next morning, along with altered brain activity patterns, scientists reported November 4 at the annual meeting of the Society for Neuroscience.
People with anxiety disorders often have trouble sleeping. The new results uncover the reverse effect — that poor sleep can induce anxiety. The study shows that “this is a two-way interaction,” says Clifford Saper, a sleep researcher at Harvard Medical School and Beth Israel Deaconess Medical Center in Boston who wasn’t involved in the study. “The sleep loss makes the anxiety worse, which in turn makes it harder to sleep.”

Sleep researchers Eti Ben Simon and Matthew Walker, both of the University of California, Berkeley, studied the anxiety levels of 18 healthy people. Following either a night of sleep or a night of staying awake, these people took anxiety tests the next morning. After sleep deprivation, anxiety levels in these healthy people were 30 percent higher than when they had slept. On average, the anxiety scores reached levels seen in people with anxiety disorders, Ben Simon said November 5 in a news briefing.
What’s more, sleep-deprived people’s brain activity changed. In response to emotional videos, brain areas involved in emotions were more active, and the prefrontal cortex, an area that can put the brakes on anxiety, was less active, functional MRI scans showed.
The results suggest that poor sleep “is more than just a symptom” of anxiety, but in some cases, may be a cause, Ben Simon said.
The cabbage tree emperor moth has wings with tiny scales that absorb sound waves sent out by bats searching for food. That absorption reduces the echoes that bounce back to bats, allowing Bunaea alcinoe to avoid being so noticeable to the nocturnal predators, researchers report online November 12 in the Proceedings of the National Academy of Sciences.
“They have this stealth coating on their body surfaces which absorbs the sound,” says study coauthor Marc Holderied, a bioacoustician at the University of Bristol in England. “We now understand the mechanism behind it.”
Bats sense their surroundings using echolocation, sending out sound waves that bounce off objects and return as echoes picked up by the bats’ supersensitive ears (SN: 9/30/17, p. 22). These moths, without ears that might alert them to an approaching predator, have instead developed scales of a size, shape and thickness suited to absorbing ultrasonic sound frequencies used by bats, the researchers found. The team shot ultrasonic sound waves at a single, microscopic scale and observed it transferring sound wave energy into movement. The scientists then simulated the process with a 3-D computer model that showed the scale absorbing up to 50 percent of the energy from sound waves.
What’s more, it isn’t just wings that help such earless moths evade bats. Other moths in the same family as B. alcinoe also have sound-absorbing fur, the same researchers report online October 18 in the Journal of the Acoustical Society of America. Holderied and his colleagues studied the fluffy thoraxes of the Madagascan bullseye moth and the promethea silk moth, and found that the fur also absorbs sound waves through a different process called porous absorption. In lab tests, the furry-bellied moths absorbed as much as 85 percent of the sound waves encountered. Researchers suspect that the equally fluffy cabbage tree emperor moth also has this ability.
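In echo terms, those absorption figures translate into a few decibels of stealth. Under the simplifying assumption that whatever sound isn’t absorbed comes straight back toward the bat, the returning energy drops like this:

```python
# Rough conversion from absorbed fraction to reduction in echo strength,
# assuming the sound that isn't absorbed is reflected back toward the bat.
import math

for absorbed in (0.50, 0.85):   # wing scales vs. thorax fur, per the studies
    drop_db = -10 * math.log10(1 - absorbed)
    print(f"{absorbed:.0%} absorption -> echo weaker by about {drop_db:.1f} dB")
```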
Other moths that have ears can hear bats coming, and can quickly swerve out of the way of their predators, dipping and diving in dizzying directions (SN: 5/26/18, p. 11). Some moths also have long tails on their wings that researchers suspect can be twirled to disrupt bats’ sound waves (SN: 3/21/15, p. 17). Still other moths produce toxins to fend off foes.
Having sound-absorbent fur and scales “might require a lot less energy in terms of protection from the moth’s side,” says Akito Kawahara, an evolutionary biologist at the Florida Museum of Natural History in Gainesville who was not involved with the study. “It’s a very different kind of passive defense system.”
Holderied and his colleagues hope next to study how multiple scales, locked together, respond to ultrasonic sound waves. The findings could one day help in developing better soundproofing technology for sound engineers and acousticians.
Screwworms, the first pest to be eliminated on a large scale by the use of the sterile male technique, have shown an alarming increase, according to U.S. and Mexican officials…. The screwworm fly lays its eggs in open wounds on cattle. The maggots live on the flesh of their host, causing damage and death, and economic losses of many millions of dollars. — Science News, November 23, 1968
Update Though eradicated in the United States in 1966, screwworms reemerged two years later, probably coming up from Mexico. Outbreaks in southern U.S. states in 1972 and in Florida in 2016 were both handled with the sterile male technique, considered one of the most successful approaches for pest control. Males are sterilized with radiation, then released into a population to breed with wild counterparts; no offspring result. The method has been used with other pests, such as mosquitoes, which were dropped by drones over Brazil this year as a test of the approach for fighting outbreaks of mosquito-borne diseases such as Zika.
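The logic of the sterile male technique can be captured in a Knipling-style toy model (illustrative parameters, not the actual eradication program’s): when sterile males vastly outnumber wild ones, only a small fraction of matings produce offspring, and the population collapses within a few generations.

```python
# Knipling-style toy model of the sterile male technique (illustrative
# parameters only). Assume the wild population would otherwise just
# replace itself each generation; with sterile males released, only a
# fraction wild / (wild + sterile) of matings produce offspring.
wild = 1_000_000               # hypothetical wild population
sterile_released = 2_000_000   # sterile males released every generation

for generation in range(1, 6):
    fertile_fraction = wild / (wild + sterile_released)
    wild = int(wild * fertile_fraction)
    print(f"generation {generation}: wild population ~ {wild:,}")
```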
A physicist, a gamer and two editors walk into a bar. No, this isn’t the setup for some joke. After work one night, a few Science News staffers tried out a new board game, Subatomic. This deck-building game combines chemistry and particle physics for an enjoyable — and educational — time.
Subatomic is simple to grasp: Players use quark and photon cards to build protons, neutrons and electrons. With those three particles, players then construct chemical elements to score points. Scientists are the wild cards: Joseph J. Thomson, Maria Goeppert-Mayer, Marie Curie and other Nobel laureates who discovered important things related to the atom provide special abilities or help thwart other players. The game doesn’t shy away from difficult or unfamiliar concepts. Many players might be unfamiliar with quarks, a group of elementary particles. But after a few rounds, it’s ingrained in your brain that, for example, two up quarks and one down quark create a proton. And Subatomic includes a handy booklet that explains in easy-to-understand terms the science behind the game. The physicist in our group vouched for the game’s accuracy but had one qualm: Subatomic claims that two photons, or particles of light, can create an electron. That’s theoretically possible, but scientists have yet to confirm it in the lab.
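The game’s building rule mirrors real quark arithmetic: up quarks carry electric charge +2/3 and down quarks carry -1/3, so two ups and a down add up to the proton’s +1, while one up and two downs add up to the neutron’s 0. A throwaway check in Python:

```python
# Quark charge bookkeeping behind the game's proton and neutron recipes.
from fractions import Fraction

charge = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def total_charge(quarks):
    """Sum the electric charges of a quark combination."""
    return sum(charge[q] for q in quarks)

print(total_charge(["up", "up", "down"]))    # 1  -> proton
print(total_charge(["up", "down", "down"]))  # 0  -> neutron
```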
The mastermind behind Subatomic is John Coveyou, who has a master’s degree in energy, environmental and chemical engineering. As the founder and CEO of Genius Games, he has created six other games, including Ion (SN: 5/30/15, p. 29) and Linkage (SN: 12/27/14, p. 32). Next year, he’ll add a periodic table game to the list. Because Science News has reviewed several of his games, we decided to talk with Coveyou about where he gets his inspiration and how he includes real science in his products. The following discussion has been edited for length and clarity.

SN: When did you get interested in science?
Coveyou: My mom was mentally and physically disabled, and my dad was in and out of prison and mental institutions. So early on, things were very different for me. I ended up leaving home when I was in high school, hopscotching around from 12 different homes throughout my junior and senior year. I almost dropped out, but I had a lot of teachers who were amazing mentors. I didn’t know what else to do, so I joined the army. While I was in Iraq, I had a bunch of science textbooks shipped to me, and I read them in my free time. They took me out of the environments I was in and became extremely therapeutic. A lot of the issues we face as a society can be worked on by the next generation having a command of the sciences. So I’m very passionate about teaching people the sciences and helping people find joy in them.
SN: Why did you start creating science games?
Coveyou: I was teaching chemistry at a community college, and I noticed that my students were really intimidated by the chemistry concepts before they even came into the classroom. They really struggled with a lot of the basic terminology. At the same time, I’ve been a board gamer pretty much my whole life. And it kind of hit me like, “Whoa, wait a second. What if I made some games that taught some of the concepts that I’m trying to teach my chemistry students?” So I just took a shot at it. The first couple of games were terrible. I didn’t really know what I was doing, but I kept at it.
SN: How do you test the games?
Coveyou: We first test with other gamers. Once we’re ready to get feedback from the general public, we go to middle school or high school students. Once we test a game with people face-to-face, we will send it across the world to about 100 to 200 different play testers, and those vary from your hard-core gamers to homeschool families to science teachers, who try it in the classroom.
SN: How do you incorporate real science into your games?
Coveyou: I pretty much always start with a science concept in mind and think about how we can create a game that best reflects the science we want to communicate. For all of our upcoming games, we include a booklet about the science. That document is not created by Genius Games. We have about 20 to 30 Ph.D.s and doctors across the globe who write the content and edit each other’s work. It’s been a real treat to actually show players how the game is accurate. We’ve had so many scientists and teachers who are just astonished that we created something like this that was accurate, but also fun to play.