After one last swing past Titan, the Cassini spacecraft is now plunging to its doom. At 3:04 p.m. EDT (12:04 p.m. PDT) on September 11, the spacecraft used a gravitational nudge from Saturn’s largest moon to set itself on a collision course with the giant planet’s atmosphere on September 15.
Cassini’s last close flyby of Titan on April 21 curved the spacecraft’s orbit to send it on a series of dives between Saturn and its rings. That approach took the probe to about 979 kilometers above the moon’s surface. This final flyby, which the mission team calls the “kiss goodbye,” was a more distant 119,049 kilometers — but that was enough to slow the spacecraft down and send it into Saturn’s atmosphere, where it will ultimately melt and disintegrate.
“There is no coming out of it,” Cassini project manager Earl Maize said in a press conference on August 29. “That final kiss goodbye, I don’t want to get too romantic about it, but that really is our last flyby.”
Titan is home to lakes and rivers made of liquid hydrocarbons, which may be hospitable to life unlike that on Earth. Mission engineers decided to send Cassini to burn up in Saturn’s atmosphere in part to protect any potential life-forms from contamination if Cassini accidentally crashed into Titan.
Cassini has been orbiting Saturn since 2004 and has revolutionized our understanding of Saturn’s atmosphere, rings and moons, and of the outer solar system in general.
To the residents of Donora, Pa., a mill town in a crook of the Monongahela River, the daily haze from nearby zinc and steel plants was the price of keeping their families fed. But on October 27, 1948, the city awoke to an unusually sooty sky, even for Donora. The next day, the high school quarterbacks couldn’t see their teammates well enough to complete a single pass.
The town was engulfed in smog for five days, until a storm finally swept the pollution out of the valley. By then, more than one-third of the population had fallen ill and 20 people were dead. Another 50 perished in the following months.
After the Donora tragedy, the federal government began to clamp down on industries that release pollutants into the air. Environmental advocates in the coming decades fought for, and won, tighter regulations. As a result, combined emissions of six common air pollutants have dropped by about 70 percent nationwide since the 1970 passage of the Clean Air Act, which regulates U.S. emissions of hazardous air pollutants. In 35 major U.S. cities, the total number of days with unhealthy air has fallen by almost two-thirds just since 2000. “It’s one of the great success stories of public health,” says Joel Kaufman, a physician and epidemiologist at the University of Washington School of Public Health in Seattle.
Our bodies feel the difference. One study, reported in JAMA last year, followed 4,602 children in Southern California between 1993 and 2012 to see how lung health correlated with three common air pollutants. As levels of ozone, nitrogen dioxide and particulate matter fell over time, so did the number of children who reported a daily cough, persistent congestion and other symptoms of irritated lungs. At the start of the study, 48 percent of children with asthma had reported bronchitis symptoms in the previous year. In communities with the greatest drop in pollutants during the study period, bronchitis prevalence fell by as much as 30 percent in children with asthma.

But the air pollution story isn’t over. Researchers from the Harvard T.H. Chan School of Public Health in Boston recently reported on links between air quality and mortality throughout the entire U.S. Medicare population (more than 60 million people who are age 65 and older or disabled). The analysis looked at levels of two common air pollutants and death rates from 2000 to 2012, while accounting for factors that might confound the results, such as race and socioeconomic status. The analysis, published in June in the New England Journal of Medicine, found that when pollutant levels rose (but remained at levels below national standards), so did death rates.
Even with vast improvements in air quality since the ’70s, people haven’t stopped dying from the air they breathe. An analysis published in 2013 from researchers at MIT estimated that about 200,000 premature deaths occur each year in the United States because of fine particulate air pollution. A study published in January in Environmental Health Perspectives reported that daily deaths over a decade in metropolitan Boston peaked on days when concentrations of three common air pollutants were at their highest, even though those levels met current U.S. Environmental Protection Agency standards.
“We’ve made these improvements in exposure,” Kaufman says, “but what more do we need to clean up?”
So despite a half-century of progress, airborne grime is still a menace — probably in ways the people of Donora never imagined. Researchers are now finding that more than the lungs are at risk, as dirty air may in fact be an accomplice to some of the greatest threats to public health, including diabetes, obesity and even dementia. Those studies are likely to inform the ongoing debate over antismog rules. The U.S. House of Representatives voted this summer to delay implementation of updated standards for the Clean Air Act.
Slow burn

As it has for more than a century, air pollution in America largely arises from power plants, industries, vehicles and other sources of fuel burning. The pollution is generally a mixture of gases — such as carbon monoxide, sulfur dioxide and nitrogen oxides — and particulate matter, microscopic solids or droplets that can be inhaled into the lungs. The pollutant that has declined the least is ozone, a hard-to-control noxious gas formed when nitrogen oxides and volatile organic compounds react with sunlight. Ozone pollution tends to soar on hot, windless summer days as the sun blazes.
Particulates come from tail pipes and smokestacks, but also consist of tiny fragments shed from tires, roads and brake pads. Fine particulates (less than 2.5 micrometers wide, or about a quarter of the width of the smallest grain of pollen) are of greatest concern because they can penetrate deeply into the lungs to reach the body’s innermost nooks and crannies. A study in April in the journal ACS Nano demonstrated that fact. Fourteen healthy volunteers intermittently riding exercise bikes inhaled gold nanoparticles — stand-ins for particulates — and 15 minutes later, the nanoparticles were detected in the bloodstream and remained present in the body for as long as three months.
While events in Donora showed that air pollution can have immediate consequences, it took decades for researchers to realize that deaths from smog could be going undetected, lost in the background noise of mortality statistics. In 1993, Harvard University scientists published a study in the New England Journal of Medicine looking at mortality rates among adults in six U.S. cities. The researchers studied more than 8,000 people for 14 to 16 years. In areas with higher levels of sulfate particles in the air, a measure of pollution, mortality rates were higher.

Dozens of similar studies have followed, including one published in 2003 that looked at death rates across 20 of the largest U.S. cities. That research found that the highest death rates occurred the day after particulate concentrations reached their highest levels, though the levels were subtle enough to go unnoticed at the time.

Scientists now know that inhaling pollutants triggers a flurry of physiological coping mechanisms throughout the body. “Until 20 years ago, we thought that air pollution affected only the respiratory system,” says Petros Koutrakis, an environmental chemist who heads the EPA Harvard Center for Ambient Particle Health Effects. By 2004, the American Heart Association published a consensus statement in Circulation laying out “a strong case that air pollution increases the risk of cardiovascular disease,” the leading cause of U.S. deaths.

More studies followed that statement, including one from Kaufman and colleagues in the New England Journal of Medicine in 2007. The researchers studied 65,893 women, looking for a link between exposure to fine particulates and death from heart attack or stroke, or even nonfatal heart attacks or the need for artery-clearing procedures. In the end, each increase of 10 micrograms of fine particulates per cubic meter of air increased the risk of any cardiovascular health event by 24 percent and the risk of dying from heart attack or stroke by 76 percent.

In 2010, the American Heart Association updated its position: “The overall evidence is consistent with a causal relationship between [fine particulate] exposure and cardiovascular morbidity and mortality.” While the mechanism is still under study, research points to inflammation, heart rate variability and blood vessel damage.
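To put the 2007 study’s per-increment figures in context, here is a rough reading, under the common assumption (not stated explicitly above) that such risk estimates scale multiplicatively with each 10-microgram-per-cubic-meter increment:

\[
1.24^{2} \approx 1.54
\]

That is, a difference of 20 micrograms of fine particulates per cubic meter would correspond to roughly a 54 percent higher risk of a cardiovascular event, if that proportional scaling holds.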
Evidence keeps accumulating. A study by Koutrakis and colleagues, published in 2012 in Archives of Internal Medicine, found similar results. When particulate concentrations rose even to mild levels — those classified as a “moderate health concern for a very small number of people” by EPA standards — the risk of stroke rose by 34 percent within a day of exposure.
Pounds and pollution

Lately, studies have moved from cardiovascular disease into more unexpected territory. And they’ve turned up compelling evidence that air quality may contribute to excess body weight. Frank Gilliland, an environmental epidemiologist at the University of Southern California in Los Angeles, became intrigued when laboratory studies suggested that certain pollutants in the environment might function as “obesogens,” contributing to weight gain by mimicking or disrupting the action of hormones, or having other effects. Still, he says, “I was very skeptical.”
Out of curiosity, he began to look for a link between childhood obesity and living close to a major roadway. His first study, published in 2010, examined over 3,000 children across California. Although the researchers found an association, they couldn’t rule out other explanations that would also lead back to cars. “Maybe the kids aren’t getting exercise because there’s a lot of traffic out,” he says.

Newer findings are more convincing, including a 2014 study by Gilliland and colleagues. They studied body mass index among children exposed to traffic-related air pollution. Of course, as the children grew over the five-year study period, their BMIs increased from an average of 16.8 to 19.4 kilograms per square meter. But children exposed to the most air pollution, compared with those least exposed, had a 14 percent larger BMI increase, which meant an additional 0.4 kg/m2 increase in BMI by age 10.

Adults, too, appear to be affected. Researchers from Harvard Medical School and elsewhere published a study in 2016 in the journal Obesity looking at whether adults living with constant exposure to traffic are more likely to be overweight. In particular, people who lived within 60 meters of a major road had a higher BMI, by 0.37 kg/m2, and more fat tissue than those who lived 440 meters from a busy road. The healthy range for an adult’s BMI is 18.5 to 25 kg/m2.
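As a quick check of how those childhood BMI numbers fit together (a rough reading, since the study reported adjusted estimates rather than this simple subtraction):

\[
19.4 - 16.8 = 2.6\ \mathrm{kg/m^2}, \qquad 0.14 \times 2.6 \approx 0.36 \approx 0.4\ \mathrm{kg/m^2}
\]

In other words, a 14 percent larger rise on top of the average five-year BMI increase works out to roughly the extra 0.4 kg/m2 by age 10 reported above.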
Studies in animals have started to offer hints about why this might be the case. Last year in the FASEB Journal, Chinese researchers described an experiment in which one group of pregnant rats was raised in filtered air scrubbed of pollutants, while another breathed the usual Beijing haze. Though they were fed the same diet, the animals living in Beijing air were heavier at the end of their pregnancies, as were their offspring, which continued to breathe the dirty air for eight weeks after birth. Among later autopsy findings: Rats exposed to pollution had higher levels of inflammation, which is thought to be a contributor to weight gain and metabolic disruption.
The relationship is probably subtle, and interwoven with genetics and lifestyle. UCLA researchers who followed a large group of African-American women over 16 years found no association between weight and exposure to particulates. For now, the connection between obesity and pollution is still a subject of investigation. But given that 11 million Americans live along major roadways, even a small effect could have widespread consequences.
Links to diabetes

For many people, diabetes goes hand in hand with obesity. One of the earliest compelling studies to suggest a relationship between diabetes and air pollution was an animal experiment published in 2009 in Circulation from researchers at Ohio State University and other institutions. The test was relatively simple: Two groups of mice were fed a high-fat diet for 24 weeks. One lived in clean, filtered air; the other group was housed in enclosures polluted with air containing fine particulates, at concentrations still within EPA standards. The mice breathed the polluted air for six hours per day, five days a week for 128 days. Even though they ate the same food, the mice living in dirty air developed metabolic changes characteristic of insulin resistance while the other mice did not. Similarly, a 2013 study from EPA scientists found that mice exposed to ozone can develop glucose intolerance, a precursor for diabetes.
In July in Diabetes, Gilliland and colleagues published data not only finding links between air pollution and diabetes in children, but also offering insight into the body’s physiological response. In the study, 314 overweight or obese children in Los Angeles were followed for an average of three years. At the end of the study, children who lived in neighborhoods with the highest concentrations of nitrogen dioxide and particulates had experienced greater declines in insulin sensitivity and had signs of impaired pancreatic beta cells, which produce insulin.

As for adults, a study this year in Environment International, conducted by researchers from eight institutions, tracked more than 45,000 African-American women across the United States. Those who were exposed to the highest concentrations of ozone were about 20 percent more likely to develop diabetes, even after adjusting for other possible explanations such as diet and exercise levels.
Brain drain

One of the latest lines of research suggests that poisons in the air might accelerate aging in the brain. Studies have long documented the connection between the nose and brain function. For reasons not yet known, for instance, one of the early signs of Parkinson’s disease is a loss of the ability to distinguish smells.
During her graduate studies at Harvard, Jennifer Weuve, now an epidemiologist at the Boston University School of Public Health, wondered if airborne pollutants might be bad for the brain. “There was really intriguing data from animal studies,” she says, which showed that inhaled pollutants had toxic effects on nerve cells. In 2012, she published the first study to note a faster-than-normal cognitive decline among people exposed to higher levels of particulates, both those smaller than 2.5 micrometers and even larger ones that are thought to be less harmful. Her study, published in Archives of Internal Medicine, analyzed data from the Nurses’ Health Study Cognitive Cohort, which included almost 20,000 women ages 70 to 81, and used geographic information and air-monitoring data to estimate pollution exposure.
More recently, researchers from Sweden examined the relationship between pollution exposure and dementia, studying the records of people in northern Sweden participating in a long-term study of memory and aging. In 2016 in Environmental Health Perspectives, the researchers reported that people with the most exposure to air pollution were also the most likely to be diagnosed with Alzheimer’s or other forms of dementia. In all, more than a dozen human studies have examined pollution’s link to dementia. Last year in NeuroToxicology, Weuve and colleagues reviewed 18 human studies published as of late 2015, and concluded that as a whole, the evidence was “highly suggestive” and in need of more exploration. “What is it going to take for more people to take this seriously?” Weuve asks.
While the relationship is far from established, animal data may help clarify the results. One study published this year in Neurobiology of Aging, from researchers at the University of Southern California, examined brain changes in mice exposed to particulate air pollution at levels commonly found near freeways. After exposure to the pollution for five hours per day, three days a week for 10 weeks, the animals showed accelerated aging in the hippocampus, a region of the brain associated with memory. And a 2015 study of older women exposed to high levels of particulate matter, at levels common in the eastern half of the United States and in parts of California, showed a small decrease in the volume of white matter, the tissue made up of myelin-coated nerve cell projections called axons.

Parkinson’s disease may also be linked to pollution. Danish researchers, with colleagues in the United States and Taiwan, published a study last year in Environmental Health Perspectives looking at people with and without Parkinson’s and their exposure to nitrogen dioxide, a marker for traffic-polluted air. The scientists identified 1,828 people in Denmark with Parkinson’s diagnosed between 1996 and 2009, and compared them with about the same number of randomly selected healthy people. Those exposed to the highest levels of air pollution had the greatest risk of developing the disease. The data, the researchers wrote, “raise concern given the increase in vulnerable aging populations.”
If science bears out the connection between pollution and brain health, or pollution and metabolism, environmental advocates and businesses may have even more reason to push for cleaner air. Researchers hope in the future to have more data on which pollutants cause the greatest harm, and why. In Donora, the site of one of the country’s biggest air pollution disasters, a sign at the Smog Museum now reads “Clean Air Started Here.” No one can yet say how clean is clean enough.
Saber-toothed kittens were the spitting image of their parents. Even as babies, the cats had not only oversized canine teeth but also unusually powerful forelimbs, Katherine Long, a graduate student at California State Polytechnic University in Pomona, and colleagues report September 27 in PLOS ONE.
As adults, the ferocious felines used those strong forelimbs to secure wriggling prey before slashing a throat or belly (thereby avoiding breaking off a tooth in the struggle). Paleontologists have puzzled over whether saber-toothed cats such as Smilodon fatalis developed those robust limbs as they grew.
To compare the growth rate of Smilodon with that of similar-sized non‒saber-toothed cats that lived alongside it, Long and her team turned to fossils collected from the La Brea Tar Pits in Los Angeles. The ancient asphalt traps hold a wealth of species and specimens from juveniles to adults, dating to between 37,000 and 9,000 years ago.
The Smilodon bones, they found, did not show any evidence of an unusual growth spurt. Instead, the bones grew longer and slimmer as the kittens grew up, following the same developmental pattern as the other large cats. That suggests that when it comes to their mighty forelimbs, Smilodon kittens were just born that way.
In a pitch-black rainforest with fluttering moths and crawling centipedes, Christina Warinner dug up her first skeleton. Well, technically it was a full skeleton plus two headless ones, all seated and draped in ornate jewelry. To deter looters, she excavated through the night while one teammate held up a light and another killed as many bugs as possible.
As Warinner worked, unanswerable questions about the people whose skeletons she was excavating flew through her mind. “There’s only so much you can learn by looking with your own eyes at a skeleton,” she says. “I became increasingly interested in all the things that I could not see — all the stories that these skeletons had to tell that weren’t immediately accessible, but could be accessible through science.”
At age 21, Warinner cut her teeth on that incredibly complex sacrificial burial left behind by the Maya in a Belize rainforest. Today, at age 37, the molecular anthropologist scrapes at not-so-pearly whites to investigate similar questions, splitting her time between the University of Oklahoma in Norman and the Max Planck Institute for the Science of Human History in Jena, Germany. In 2014, she and colleagues reported a finding that generated enough buzz to renew interest in an archaeological resource many had written off decades ago: fossilized dental plaque, or calculus. Ancient DNA and proteins in the plaque belong to microbes that could spill the secrets of the humans they once inhabited — what the people ate, what ailed them, perhaps even what they did for a living.
Bacteria form plaque that mineralizes into calculus throughout a person’s life. “It’s the only part of your body that fossilizes while you’re still alive,” notes Warinner. “It’s also the last thing to decay.”
Though plaque is prolific in the archaeological record, most researchers viewed calculus as “the crap you scraped off your tooth in order to study it,” says Amanda Henry, an archaeologist at Leiden University in the Netherlands. With some exceptions, molecular biologists saw calculus as a shoddy source of ancient DNA.
But a few researchers, including Henry, had been looking at calculus for remnants of foods as potential clues to ancient diets. Inspired by some of Henry’s images of starch grains preserved in calculus, Warinner wondered if the plaque might yield dead bacterial structures, perhaps even bacteria’s genetic blueprints.
Her timing couldn’t have been better. Warinner began her graduate studies at Harvard in 2004, just after the sequencing of the human genome was completed, and by the time she left in 2010, efforts to survey the human microbiome were in full swing. As a postdoc at the University of Zurich, Warinner decided to attempt to extract DNA from the underappreciated dental grime preserved on the teeth of four medieval skeletons from Germany.

At first, the results were dismal. But she kept at it. “Tina has a very interested, curious and driven personality,” Henry notes. Warinner turned to a new instrument that could measure DNA concentrations in skimpy samples, a Qubit fluorometer. A surprising error message appeared: DNA too high. Dental calculus, it turned out, was chock-full of genetic material. “While people were struggling to pull out human DNA from the skeleton itself, there’s 100 to 1,000 times more DNA in the calculus,” Warinner says. “It was sitting there in almost every skeletal collection untouched, unanalyzed.”

To help her interpret the data, Warinner mustered an army of collaborators from fields ranging from immunology to metagenomics. She and her colleagues found a slew of proteins and DNA snippets from bacteria, viruses and fungi, including dozens of oral pathogens, as well as the full genetic blueprint of an ancient strain of Tannerella forsythia, which still infects people’s gums today. In 2014, in Nature Genetics, Warinner’s team revealed a detailed map of a miniature microbial world on the decaying teeth of those German skeletons.
Later in 2014, her group found the first direct protein-based evidence of milk consumption in the plaque of Bronze Age skeletons from 3000 B.C. That same study linked milk proteins preserved in the calculus of other ancient human skeletons to specific animals — providing a peek into long-ago lifestyles.
“The fact that you can tell the difference between, say, goat milk and cow milk, that’s kind of mind-blowing,” says Laura Weyrich, a microbiologist at the University of Adelaide in Australia, who also studies calculus. Since then, Warinner has found all sorts of odds and ends lurking on archaic chompers, from poppy seeds to paint pigments.

Warinner’s team is still looking at the origins of dairying and its microbial players, but she’s also branching out to the other end of the digestive spectrum. The researchers are looking at ancient DNA in paleofeces, which is exactly what it sounds like — desiccated or semifossilized poop. It doesn’t stay as fresh as plaque in the archaeological record. But she’s managed to find some sites with well-preserved samples. By examining the array of microbes that lived in the excrement and plaque of past humans and their relatives, Warinner hopes to characterize how our microbial communities have changed through time — and how they’ve changed us.
The research has implications for understanding chronic, complex human diseases over time. Warinner’s ancient DNA work “opens up a window on past health,” says Clark Larsen, an anthropologist at Ohio State University.
It’s all part of what Warinner calls “the archaeology of the unseen.”
Editor’s note: This story was corrected on October 4, 2017, to note that the 2014 report on milk consumption was based on protein evidence, not DNA.
A founding father of behavioral economics — a research school that has popularized the practice of “nudging” people into making decisions that authorities deem to be in their best interests — has won the 2017 Nobel Memorial Prize in Economic Sciences.
Richard Thaler, of the University of Chicago Booth School of Business, received the award October 9 for leading a discipline that has championed the idea that humans are not purely rational and selfish, as economists have long posited. Instead, he argues, we are driven by simple, often emotionally fueled assumptions that can lead us astray.
“Richard Thaler has pioneered the analysis of ways in which human decisions systematically deviate from traditional economic models,” says cognitive scientist Peter Gärdenfors of Lund University, Sweden, a member of the Economic Sciences Prize Committee.
Thaler argues that, even if people try to make good economic choices, our thinking abilities are limited. In dealing with personal finances, for instance, he finds that most people mentally earmark money into different accounts, say for housing, food, vacations and entertainment. That can lead to questionable decisions, such as saving for a vacation in a low-interest savings account while buying household goods with a high-interest credit card.
At an October 9 news conference at the University of Chicago, Thaler referenced mental accounting in describing what he would do with the roughly $1.1 million award. “Every time I spend any money on something fun, I’ll say it came from the Nobel Prize.”
Thaler’s research has also focused on how judgments about fairness, such as sudden jumps in the prices of consumer items, affect people’s willingness to buy those items. A third area of his research finds that people’s short-term desires often override long-term plans. A classic example consists of putting off saving for retirement until later in life.
That research in particular inspired his 2008 book Nudge: Improving Decisions about Health, Wealth and Happiness, coauthored by Cass Sunstein, now at Harvard Law School. Nudging, also known as libertarian paternalism, is a way for public and private institutions to prod people to make certain decisions (SN: 3/18/17, p. 18). For instance, employees more often start saving for retirement early in their careers when offered savings plans that they must opt out of. Many governments, including those of the United Kingdom and the United States, have funded teams of behavioral economists, called nudge units, to develop ways to nudge people to, say, apply for government benefits or comply with tax laws. A total of 75 nudge units now exist worldwide, Thaler said at the news conference.
Nudging has its roots in a line of research, dubbed heuristics and biases, launched in the 1970s by two psychologists — 2002 economics Nobel laureate Daniel Kahneman of Princeton University and the late Amos Tversky of Stanford University. Investigators in heuristics and biases contend that people can’t help but make many types of systematic thinking errors, such as being overconfident in their decisions.
Thaler, like Kahneman, views the mind as consisting of one system for making rapid, intuitive decisions that are often misleading and a second system for deliberating slowly and considering as much relevant information as possible.
Despite the influence of Thaler’s ideas on research and social policy, they are controversial among decision researchers (SN: 6/4/11, p. 26). Some argue that nudging overlooks the power of simple rules-of-thumb for making decisions that people can learn to wield on their own.
“I don’t think I’ve changed everybody’s minds,” Thaler said. “But many young economists embrace behavioral economics.”
A new stretchy prosthetic could reduce the number of surgeries that children with leaking heart valves must undergo.
The device, a horseshoe-shaped implant that wraps around the base of a heart valve to keep it from leaking, is described online October 10 in Nature Biomedical Engineering. In adults, a rigid ring is used, but it can’t be implanted in children because it would constrict their natural heart growth. Instead, pediatric surgeons cinch their patients’ heart valves with stitches — which can break or pull through tissue as a child grows, requiring further surgery to repair. It’s not uncommon for a child to require two to four of these follow-up procedures, says study coauthor Eric Feins, a cardiac surgeon at Boston Children’s Hospital and Harvard Medical School. Doctors in the United States perform over 1,000 pediatric heart valve repair surgeries each year.
“It’s quite invasive to do surgeries on a beating heart,” says coauthor Jeff Karp, a biomedical engineer at Brigham and Women’s Hospital in Boston. To decrease the need for these open-heart follow-up procedures, Karp and colleagues invented a new type of implant that stretches as its wearer grows. It’s made of a biodegradable polyester core covered by a mesh tube. The material of this outer sleeve is interwoven like a Chinese finger trap, so when heart valve tissue grows and tugs on the tube’s ends, it stretches. Over time, the core dissolves, and the growing tissue can pull the sleeve into a longer, thinner shape. By tweaking an implant’s initial length and width, the core’s chemical makeup and the tightness of the sleeve’s braid, the researchers can fine-tune the stretchiness. This could allow developers to tailor each device to accommodate an individual patient’s expected growth rate.
“This is a brand new idea. I’ve never seen anything like it before,” says Gus Vlahakes, a cardiac surgeon at Massachusetts General Hospital in Boston, who was not involved in the study. “It’s a great concept.” Karp and colleagues tested prototypes of the heart implant by inserting them into growing piglets. Twenty weeks after surgery, the implants had expanded as expected. The biomedical device company CryoLife, Inc. is now using the researchers’ design to build ring implants for further studies in lab animals, Karp says. “Clinical trials could start within a few years, if all goes well,” he says.
This growth-accommodating design may also be repurposed to make other kinds of pediatric implants. For instance, stretchable devices could supplant the stiff plates and staples that surgeons currently use to treat bone growth disorders. The researchers’ new implant model is “very generalizable,” Vlahakes says.
On Jupiter, lightning jerks and jolts a lot like it does on Earth.
Jovian lightning emits radio wave pulses that are typically separated by about one millisecond, researchers report May 23 in Nature Communications. The energetic prestissimo, the scientists say, is a sign that the gas giant’s lightning propagates in pulses, at a pace comparable to that of the bolts that cavort through our own planet’s thunderclouds. The similarities between the two worlds’ electrical phenomena could have implications for the search for alien life. Arcs of lightning on both worlds appear to move somewhat like a winded hiker going up a mountain, pausing after each step to catch their breath, says Ivana Kolmašová, an atmospheric physicist at the Czech Academy of Sciences in Prague. “One step, another step, then another step … and so on.”
Here on Earth, lightning forms as turbulent winds within thunderclouds cause many ice crystals and water droplets to rub together, become charged and then move to opposite sides of the clouds, progressively generating static electrical charges. When the charges grow big enough to overcome the air’s ability to insulate them, electrons are released — the lightning takes its first step. From there, the surging electrons will repeatedly ionize the air and rush into it, lurching the bolt forward at an average of hundreds of thousands of meters per second.
Scientists have suggested that superbolts observed in Jovian clouds might also form by collisions between ice crystals and water droplets (SN: 8/5/20). But no one knew whether the alien bolts extended and branched in increments, as they do on Earth, or if they took some other form.
For the new study, Kolmašová and her colleagues used five years of radio wave data collected by NASA’s Juno spacecraft (SN: 12/15/22). Analyzing hundreds of thousands of radio wave snapshots, the team found radio wave emissions from Jovian lightning appeared to pulse at a rate comparable to that of Earth’s intracloud lightning — arcs of electricity that never strike ground.
If bolts extend through Jupiter’s water clouds at a similar velocity as they do in Earth’s clouds, then Jovian lightning might branch and extend in steps that are hundreds to thousands of meters long. That’s comparable in length to the jolted strides of Earth’s intracloud lightning, the researchers say.
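The reported numbers fit that rough picture. Taking the roughly one-millisecond spacing between radio pulses and an Earth-like propagation speed of hundreds of thousands of meters per second (both figures from the passages above; applying the Earth-like speed to Jupiter is an assumption), each step would cover about

\[
d \approx v\,\Delta t \approx \left(10^{5}\ \text{to}\ 10^{6}\ \mathrm{m/s}\right) \times 10^{-3}\ \mathrm{s} \approx 10^{2}\ \text{to}\ 10^{3}\ \mathrm{m},
\]

that is, hundreds to thousands of meters per step, matching the researchers’ estimate.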
“That’s a perfectly reasonable explanation,” says atmospheric physicist Richard Sonnenfeld of the New Mexico Institute of Mining and Technology in Socorro, who wasn’t involved in the study. Alternatively, he says, the signals could be produced as pulses of electrical current propagate back and forth along tendrils of lightning that have already formed, rather than from the stop-and-go advancements of a new bolt. On Earth, such currents cause some bolts to appear to flicker.
But stop and go seems like a sound interpretation, says atmospheric physicist Yoav Yair of Reichman University in Herzliya, Israel. Kolmašová and her colleagues “show that if you’re discharging a cloud … the physics remains basically the same [on Jupiter as on Earth], and the current will behave the same.”
If that universality is real, it could have implications for the search for life elsewhere. Experiments have shown that lightning strikes on Earth could have smelted some of the chemical ingredients needed to form the building blocks of life (SN: 3/16/21). If lightning is discharging in a similar way on alien worlds, Yair says, then it could be producing similar ingredients in those places too.
Planetary scientists now know how thick the Martian crust is, thanks to the strongest Marsquake ever observed.
On average, the crust is between 42 and 56 kilometers thick, researchers report in a paper to appear in Geophysical Research Letters. That’s roughly 70 percent thicker than the average continental crust on Earth.
The measurement was based on data from NASA’s InSight lander, a stationary probe whose seismometer recorded waves rippling through Mars’ interior for four Earth years. Last May, the entire planet shook with a magnitude 4.7 quake that lasted more than six hours (SN: 5/13/22). “We were really fortunate that we got this quake,” says seismologist Doyeon Kim of ETH Zurich. InSight recorded seismic waves from the quake that circled Mars up to three times. That let Kim and colleagues infer the crust’s thickness over the whole planet.
Not only is the crust thicker than that of Earth and the moon, but its thickness also varies across the Red Planet, the team found. And that might explain a known north-south elevation difference on Mars.
Topographic and gravity data from Mars orbiters have shown that the planet’s northern hemisphere is substantially lower than the southern one. Researchers had suspected that density might play a part: Perhaps the rocks that make up northern Mars have a different density than those of southern Mars.
But the crust is thinner in the northern hemisphere, Kim and colleagues found, so the rocks in both hemispheres probably have the same average densities. That finding helps scientists narrow down the explanations for why the difference exists in the first place.
Knowing the crust’s depth, the team also calculated that much of Mars’ internal heat probably originates in the crust. Most of this heat comes from radioactive elements such as potassium, uranium and thorium. An estimated 50 to 70 percent of those elements are probably in the crust rather than the underlying mantle, computer simulations suggest. That supports the idea that parts of Mars still have volcanic activity, contrary to a long-held belief that the Red Planet is dead (SN: 11/3/22).
An ancient vegetarian dinosaur from the French countryside has given paleontologists something to sink their teeth into.
The most striking feature of a new species of rhabdodontid that lived from 84 million to 72 million years ago is its oversized, scissorslike teeth, paleontologist Pascal Godefroit, of the Royal Belgian Institute of Natural Sciences in Brussels, and his colleagues report October 26 in Scientific Reports. Compared with other dinos of its kind, Matheronodon provincialis’ teeth were at least twice as large but fewer in number. Some teeth reached up to 6 centimeters long, while others grew up to 5 centimeters wide. They looked like a caricature of normal rhabdodontid teeth, Godefroit says.

Of hundreds of fossils unearthed over the last two decades at a site called Velaux-La Bastide Neuve in the French countryside, a handful of jaw bones and teeth now have been linked to the new species. The toothy dino belongs to a group of herbivorous, bipedal dinosaurs common in the Cretaceous Period. Rhabdodontids sported bladelike teeth, and likely noshed on the tough, woody tissues of plants. Palm trees, common in Europe at the time, might have been on the menu.
Rhabdodontid teeth have ridges covered by a thick layer of enamel on one side and little to no ridges or enamel on the other. Teeth in the upper jaw have more ridges and enamel on the outer edge, while the reverse is true for bottom teeth. A closer look at the microstructure of M. provincialis’ teeth revealed an exaggerated version of this — many more ridges and a lopsided enamel coating. Because enamel protects against wear and tear, the unprotected side of each tooth would have worn down faster, so chewing would have continually sharpened the dino’s teeth. “They operated like self-sharpening serrated scissors,” Godefroit says.
Light-sensitive cells in the eyes of some fish do double-duty. In pearlsides, cells that look like rods — the stars of low-light vision — actually act more like cones, which only respond to brighter light, researchers report November 8 in Science Advances. It’s probably an adaptation to give the deep-sea fish acute vision at dawn and dusk, when they come to the surface of the water to feed.
Rods and cones studding the retina can work in tandem to give an animal good vision in a wide variety of light conditions. Some species that live in dark environments, like many deep-sea fish, have dropped cones entirely. But pearlside eyes have confused scientists: The shimmery fish snack at the water’s surface at dusk and dawn, catching more sun than fish that feed at night. Most animals active at these times of day use a mixture of rods and cones to see, but pearlside eyes appear to contain only rods. “That’s actually not the case when you look at it in more detail,” says study coauthor Fanny de Busserolles, a sensory biologist at the University of Queensland in Australia.
She and her colleagues investigated which light-responsive genes those rod-shaped cells were turning on. The cells were making light-sensitive proteins usually found in cones, the researchers found, rather than the rod-specific versions of those proteins.
These rodlike cones still have the more elongated shape of a rod. And like regular rods, they are sensitive to even small amounts of light. But the light-absorbing proteins inside match those found in cones, and are specifically tuned to respond to the blue wavelengths of light that dominate at dawn and dusk, the researchers found. The fish don’t have color vision, though, which relies on having different cones sensitive to different wavelengths of light.
“Pearlsides found a more economical and efficient way of seeing in these particular light conditions by combining the best characteristics of both cell types into a single cell,” de Busserolles says. A few other animals have also been found to have photoreceptors that fall somewhere between traditional rods and cones, says Belinda Chang, an evolutionary biologist at the University of Toronto who wasn’t involved in the study. Chang’s lab recently identified similar cells in the eyes of garter snakes. “These are thought to be really cool and unusual receptors,” she says.
Together, finds like these begin to challenge the idea that rods and cones are two separate visual systems, de Busserolles says. “We usually classify photoreceptors into rods or cones only,” she says. “Our results clearly show that the reality is more complex than that.”