This deep-sea fish uses weird eyes to see in dark and light

Light-sensitive cells in the eyes of some fish do double-duty. In pearlsides, cells that look like rods — the stars of low-light vision — actually act more like cones, which only respond to brighter light, researchers report November 8 in Science Advances. It’s probably an adaptation to give the deep-sea fish acute vision at dawn and dusk, when they come to the surface of the water to feed.

Rods and cones studding the retina can work in tandem to give an animal good vision in a wide variety of light conditions. Some species that live in dark environments, like many deep-sea fish, have dropped cones entirely. But pearlside eyes have confused scientists: The shimmery fish snack at the water’s surface at dusk and dawn, catching more sun than fish that feed at night. Most animals active at these times of day use a mixture of rods and cones to see, but pearlside eyes appear to contain only rods.
“That’s actually not the case when you look at it in more detail,” says study coauthor Fanny de Busserolles, a sensory biologist at the University of Queensland in Australia.

She and her colleagues investigated which light-responsive genes those rod-shaped cells were turning on. The cells were making light-sensitive proteins usually found in cones, the researchers found, rather than the rod-specific versions of those proteins.

These rodlike cones still have the more elongated shape of a rod. And like regular rods, they are sensitive to even small amounts of light. But the light-absorbing proteins inside match those found in cones, and are specifically tuned to respond to the blue wavelengths of light that dominate at dawn and dusk, the researchers found. The fish don’t have color vision, though, which relies on having different cones sensitive to different wavelengths of light.

“Pearlsides found a more economical and efficient way of seeing in these particular light conditions by combining the best characteristics of both cell types into a single cell,” de Busserolles says.
A few other animals have also been found to have photoreceptors that fall somewhere between traditional rods and cones, says Belinda Chang, an evolutionary biologist at the University of Toronto who wasn’t involved in the study. Chang’s lab recently identified similar cells in the eyes of garter snakes. “These are thought to be really cool and unusual receptors,” she says.

Together, finds like these begin to challenge the idea that rods and cones are two separate visual systems, de Busserolles says. “We usually classify photoreceptors into rods or cones only,” she says. “Our results clearly show that the reality is more complex than that.”

Ancient European farmers and foragers hooked up big time

Thousands of years ago, hunter-gatherers native to Europe and incoming farmers from what’s now Turkey got up close and personal for a surprisingly long time, researchers say. This mixing reshaped the continent’s genetic profile differently from one region to another.

Ancient DNA from foragers and farmers in eastern, central and western Europe indicates that they increasingly mated with each other from around 8,000 to nearly 4,000 years ago, a team led by geneticist Mark Lipson of Harvard Medical School in Boston reports online November 8 in Nature. That time range covers much of Europe’s Neolithic period, which was characterized by the spread of farming, animal domestication and polished stone tools.
The new findings lend support to the idea that Europe and western Asia witnessed substantial human population growth and migrations during the Neolithic, says archaeologist Peter Bellwood of Australian National University in Canberra. So much mating occurred over such a long time that “geneticists can no longer assume that living people across Europe are a precise reflection of European genetic history,” he says.
Previous studies of ancient DNA indicated that farmers in Anatolia (modern Turkey) migrated into Europe roughly 8,000 years ago. Researchers generally assumed that newcomers and native hunter-gatherers interbred at first, perhaps as a single wave of farmers moved through Europe to the Atlantic coast, Lipson says. From this perspective, foragers either joined farming cultures or abandoned their home territories and scattered elsewhere. But it now appears that, after a major migration of farmers into Europe, many groups of farmers and hunter-gatherers living in particular regions mingled to varying extents for many centuries, the researchers say.
“Even though there weren’t any major new migrations into Europe after the arrival of farmers, there were ongoing ancestry changes throughout the Neolithic due to interactions between farmers and hunter-gatherers,” Lipson says. Central and northern Europeans next experienced large DNA changes at the start of the Bronze Age around 5,000 years ago, with the arrival of nomadic herders from western Asia (SN: 7/11/15, p. 11).

Lipson’s team analyzed DNA extracted from the skeletons of 154 farmers from Hungary, Germany and Spain, dating to around 8,000 to 4,200 years ago. The farmers’ DNA was compared with DNA from three Neolithic hunter-gatherers found in Hungary, Luxembourg and Spain; a fourth hunter-gatherer from Italy dating to about 14,000 years ago; and 25 Anatolian farmers from as early as 8,500 years ago.

Farmers in each European region displayed increasing amounts of hunter-gatherer ancestry over time, with highs of about 10 percent in Hungary and 20 percent in Germany by around 5,000 years ago, and about 30 percent in Spain by 4,200 years ago. Three farmers from a 6,000-year-old site in Germany fell outside the general trend for that part of Neolithic Europe, displaying 40 to 50 percent hunter-gatherer ancestry.
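A toy version of the underlying mixture logic: treat a sampled individual’s genome as a blend of two source populations and find the blend fraction that fits best. The allele frequencies below are invented for illustration; the study itself relies on genome-wide statistical machinery, not this least-squares sketch.

```python
import numpy as np

# Toy admixture estimate: model a sampled genome's allele frequencies as
# alpha * hunter_gatherer + (1 - alpha) * anatolian_farmer, and pick the
# alpha that fits best. All frequencies here are invented.
f_hg = np.array([0.10, 0.80, 0.35, 0.60, 0.25])      # hunter-gatherer source
f_farmer = np.array([0.50, 0.20, 0.75, 0.30, 0.65])  # Anatolian farmer source
f_sample = 0.30 * f_hg + 0.70 * f_farmer             # pretend observed sample

alphas = np.linspace(0.0, 1.0, 101)
errors = [np.sum((f_sample - (a * f_hg + (1 - a) * f_farmer)) ** 2)
          for a in alphas]
best = alphas[int(np.argmin(errors))]
print(f"estimated hunter-gatherer ancestry: {best:.0%}")  # -> 30%
```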

Genes got passed from farmers to hunter-gatherers as well, although skeletal remains of Neolithic hunter-gatherers are much scarcer than those of their cultivating contemporaries. A hunter-gatherer from the 6,000-year-old German site, identified via chemical markers of diet in the bones, carried around 27 percent ancestry from farmers. A hunter-gatherer discovered at a Hungarian farming site dating to roughly 7,700 years ago possessed about 20 percent ancestry from farmers. Still, previous work has shown neighboring European farmers and hunter-gatherers sometimes kept their distance (SN: 11/16/13, p. 13).

Despite this unexpected evidence of long-term mating among communities with different cultures and lifestyles, the tempo of genetic change and the population sizes of farmers and hunter-gatherers remain poorly understood, says archaeologist Alasdair Whittle of Cardiff University in Wales.

Parents may one day be morally obligated to edit their baby’s genes

A doctor explains to a young couple that he has screened the pair’s in vitro fertilized embryos and selected those that had no major inheritable diseases. The couple had specified that they wanted a son with hazel eyes, dark hair and fair skin. Then the doctor announces that he has also taken the liberty of eliminating the “burden” of genetic propensities for baldness, nearsightedness, alcoholism, obesity and domestic violence.

The prospective mother replies that they didn’t want those revisions. “I mean diseases, yes, but …” Her husband jumps in to say, “We were just wondering if it’s good to leave a few things to chance.”
But the doctor reminds the would-be parents why they came to him in the first place. They want to give their child “the best possible start.”

That’s a scene from the movie Gattaca, which premiered 20 years ago in October. But thanks to recent advances in gene-editing tools such as CRISPR/Cas9, genetic manipulation of human embryos is becoming reality.

Soon, designer babies like those described in the film may even become morally mandatory, some ethicists say.

Gattaca’s narrator tells us that such genetic manipulation of in vitro fertilized embryos has become “the natural way of giving birth” in the near future portrayed in the film. It has also created an underclass of people whose parents didn’t buy those genetic advantages for their children.
Until recently, that sort of fiddling with human DNA was only science fiction and allegory, a warning against a new kind of eugenics that could pit the genetic haves and have-nots against each other. At a symposium sponsored by the Hastings Center on October 26 before the World Conference of Science Journalists in San Francisco, ethicists and journalists explored the flip side of that discussion: whether parents have a moral obligation to make “better” babies through genetic engineering.

Technology that can precisely change a baby’s genes is quickly becoming reality. This year, scientists reported using CRISPR/Cas9 in viable human embryos to fix mutations that cause heart and blood disorders. CRISPR/Cas9 acts as molecular scissors that relatively easily and precisely manipulate DNA. Scientists have honed the tool in the roughly five years it has been around, creating myriad “CRISPR” mice, fish, pigs, cows, plants and other creatures. Its use in human embryos has been hotly debated. Should we or shouldn’t we?

For many people, the fear of a class of genetically enhanced people is reason enough not to tinker with the DNA of the human germline — eggs, sperm, embryos and the cells that give rise to eggs and sperm. By all means, correct diseases, these folks say, but don’t add extras or meddle with characteristics that don’t have anything to do with health. A panel of ethicists convened by the U.S. National Academies of Sciences and Medicine also staked out that position in February, ruling that human germline engineering might someday be permissible for correcting diseases, but only if there are no alternatives and not for enhancements.

But the question “should we?” may not matter much longer, predicted the Hastings Center’s Josephine Johnston at the symposium. As science advances and people become more comfortable with gene editing, laws prohibiting tinkering with embryos will fall, she said, and it will be up to prospective moms and dads to decide for themselves. “Will editing a baby’s genes be mandatory, the kind of thing you’re supposed to do?”

For Julian Savulescu, an ethicist at the University of Oxford, the answer is yes. Parents are morally obligated to take steps to keep their children healthy, he says. That includes vaccinating them and giving them medicine when they’re ill. Genetic technologies are no different, he argues. If these techniques could make children resistant to infections, cancer or diabetes, then parents have an obligation to use them, he says.

For now, he cautions, CRISPR’s safety and efficacy haven’t been established, so parents shouldn’t subject their children to the risks. He also points out that this sort of editing would require in vitro fertilization, which is prohibitively costly for many people. (And couples could pretty much forget about having the perfect baby through sexual intercourse. Designer darlings would have to be created in the lab.)

But someday, possibly soon, gene editing could become a viable medical intervention. “If CRISPR were safe and not excessively costly, we have a moral obligation to use it to prevent and treat disease,” Savulescu says.

Using gene editing to cure genetic diseases is something retired bioethicist Ronald Green of Dartmouth College can get behind. “I fully support the reproductive use of gene-editing technology for the prevention and elimination of serious genetic diseases,” Green said at the symposium. “If we could use gene editing to remove the sequences in an embryo that cause sickle cell disease or cystic fibrosis, I would say not only that we may do so, but in the case of such severe diseases, we have a moral obligation to do so.”

But that’s where a parent’s obligation stops, Green said. Parents and medical professionals aren’t required to enhance health “to make people who are better than well,” he said.

Savulescu, however, would extend the obligation to other nondisease conditions that could prevent a kid from having a full set of opportunities in life. For instance, children with poor impulse control may have difficulty succeeding in school and life. The drug Ritalin is sometimes prescribed to such kids. “If CRISPR could do what Ritalin does and improve impulse control and give a child a greater range of opportunities,” he says, “then I’d have to say we have the same moral obligation to use CRISPR as we do to provide education, to provide an adequate diet or to provide Ritalin.”

Green rejected the idea that parents should, or even could, secure a better life for their kids through genetic manipulation. Scientists haven’t identified all the genes that contribute to good lives — and there are plenty of factors beyond genetics that go into making someone happy and successful. Already, Green said, “the healthy natural human genome has enough variety in it to let any child successfully navigate the world and fulfill his or her own vision of happiness.” (A version of his remarks was posted on the Hastings Center’s Bioethics Forum.)

Many traits that would help a person make more money or have an easier life are associated with social prejudices and discrimination, says Marcy Darnovsky, the executive director of the Center for Genetics and Society in Berkeley, Calif. People who are taller and fair-skinned tend to make more money. If parents were to engineer their children to have such traits, “I think we would be inscribing those kinds of social prejudices in biology,” she says. “We get to very troubled waters very quickly as a society once we start down that road.”

Creating a class of “genobility,” as Green calls genetically enhanced people, would increase already staggering levels of inequality, Darnovsky says. That, says Savulescu, “is the Gattaca objection I often get.”

Yes, he acknowledges, “it could create even greater inequalities, there’s no doubt about that.” Whenever money is involved, people who have more of it can afford better treatments, diets and healthier lifestyles — and disparities will exist. “However, this is not inevitable,” Savulescu says. Countries with national health care systems could provide such services for free. Such measures could even correct natural inequalities, he argues.

Johnston worries that genetic manipulation could change family dynamics. Parents might be disappointed if their designer baby doesn’t turn out as desired. That’s a variation of the old problem of unfulfilled parental expectations, Savulescu says. “It’s a problem that deserves attention, but it’s not a problem that deserves banning CRISPR,” he says.

What the Pliocene epoch can teach us about future warming on Earth

Imagine a world where the polar ice sheets are melting, sea level is rising and the atmosphere is stuffed with about 400 parts per million of carbon dioxide. Sound familiar? It should. We’re living it. But the description also matches Earth a little over 3 million years ago, in the middle of the geologic epoch known as the Pliocene.

To understand how our planet might respond as global temperatures rise, scientists are looking to warm periods of the past. These include the steamy world of the Cretaceous Period around 90 million years ago and the boundary of the Paleocene and Eocene epochs, about 56 million years ago.
But to many researchers, the best reference for today’s warming is the more recent Pliocene, which lasted from 5.3 million to 2.6 million years ago. The mid-Pliocene was the last time atmospheric CO2 levels were similar to today’s, trapping heat and raising global temperatures above the levels Earth is experiencing now.

New research is illuminating how the planet responded to Pliocene warmth. One set of scientists has fanned out across the Arctic, gathering geologic clues to how temperatures there may have been as much as 19 degrees Celsius higher than today. The warmth allowed trees to spread far to the north, creating Arctic forests where three-toed horses, giant camels and other animals roamed. When lightning struck, wildfires roared across the landscape, spewing soot into the air and altering the region’s climate.
Other researchers are pushing the frontiers of climate modeling, simulating how the oceans, atmosphere and land responded as Pliocene temperatures soared. One new study shows how the warmth may have triggered huge changes in ocean circulation, setting up an enormous overturning current in the Pacific Ocean, similar to the “conveyor belt” in today’s Atlantic that drives weather and climate. A second new paper suggests that the Greenland and Antarctic ice sheets might have responded differently to Pliocene heat, melting at different times.

All this research into the last great warm period is helping scientists think more deeply about how the future might play out. It may not be a road map to the next 100 years, but the Pliocene is a rough guide to the high sea levels, vanishing ice and altered weather patterns that might arrive hundreds to thousands of years from now.

“It’s a case study for understanding how warm climates function,” says Heather Ford, a paleoceanographer at the University of Cambridge. “It’s our closest analog for future climate change.”

Walk through history
Teasing out the history of the Pliocene is a little like digging through a family’s past. One group of enthusiasts goes through genealogical records, collecting data on who lived where, and when. Another group uses computer software and modeling to look for broad patterns that describe how the family grew and moved over time.

The data detectives begin their work in rocks and sediments dating to the Pliocene that are scattered around the world like family-tree histories in city library archives. In 1988, the U.S. Geological Survey began a project called PRISM, for Pliocene Research, Interpretation and Synoptic Mapping, which aims to gather as many geologic clues as possible about Pliocene environments.
At its start, PRISM focused on a collection of deep-sea cores drilled from the floor of the North Atlantic Ocean. Different types of marine organisms thrive in water of different temperatures. By comparing the relative abundance of species of tiny organisms preserved in the deep-sea cores, PRISM scientists could roughly map how cold-loving organisms gave way to warm ones (and vice versa) at different times in the past. Early results from the project, reported in 1992 by USGS research geologist Harry Dowsett and colleagues, showed that during the Pliocene, warming was amplified at higher latitudes in the North Atlantic.
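The logic of those abundance-based temperature estimates is simple enough to sketch. Below is a minimal, hypothetical illustration: the species groupings, preferred temperatures and counts are invented, and real transfer-function methods are considerably more sophisticated.

```python
# Toy abundance-weighted paleotemperature estimate: assign each species
# group a preferred water temperature and average them, weighted by how
# common each group is in a core sample. All numbers are hypothetical.

preferred_temp_c = {"cold_lover": 6.0, "temperate": 14.0, "warm_lover": 22.0}

def assemblage_temperature(counts):
    """Estimate water temperature from species counts in one core sample."""
    total = sum(counts.values())
    return sum(preferred_temp_c[species] * n / total
               for species, n in counts.items())

cool_interval = {"cold_lover": 70, "temperate": 25, "warm_lover": 5}
warm_interval = {"cold_lover": 10, "temperate": 40, "warm_lover": 50}

print(assemblage_temperature(cool_interval))  # 8.8 (cooler interval)
print(assemblage_temperature(warm_interval))  # 17.2 (warmer interval)
```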

Scientists continue to add to the PRISM records. One international team drilled a sediment core from beneath a Siberian lake and found that summer air temperatures there, in the mid-Pliocene, were as high as 15° C (about 59° Fahrenheit). That’s 8 degrees warmer than today (SN: 6/15/13, p. 13). Other researchers uncovered clues, such as plant fossils from peat bogs, that suggest mean annual temperatures on Canada’s now-frozen Ellesmere Island near Greenland were as much as 18 degrees higher than today (SN: 4/6/13, p. 9).

Now, a new group of biologists, geoscientists and other experts in past landscapes has banded together in a project called PoLAR-FIT, for Pliocene Landscape and Arctic Remains — Frozen in Time. The team is focusing on the Arctic because, just as today’s Arctic is warming faster than other parts of the planet, the Pliocene Arctic warmed more than the rest of the globe. “That’s what we call polar amplification,” says Tamara Fletcher, a team member and paleoecologist at the University of Montana in Missoula. “It was even more magnified in the Pliocene than what we’re seeing today.”

PoLAR-FIT scientists travel to the Arctic to collect geologic evidence about how the region responded to rising temperatures in the Pliocene. In the thawing permafrost slopes of Ellesmere Island, for instance, Fletcher and colleagues have been mapping black layers of charcoal in sediments dating from the Pliocene. Each charcoal layer represents a fire that burned through the ancient forest. By tracking the events across Ellesmere and other nearby islands, Fletcher’s team discovered that fire was widespread across what is now the Canadian Arctic.
Wildfires changed vegetation across the landscape, possibly altering how the Arctic responded to rising temperatures. Soot rising from the fires would have darkened the skies, potentially leading to local or regional weather changes. “How important is that to the warming?” asks Bette Otto-Bliesner, a paleoclimatologist at the National Center for Atmospheric Research in Boulder, Colo. “That’s something we’re still trying to determine.” Fletcher, Otto-Bliesner and colleagues described the charcoal discovery, along with modeling studies of the fires’ effects, in Seattle in October at a meeting of the Geological Society of America.

In 2012, about 283,000 square kilometers of forest burned in Russia. Three years later, more than 20,000 square kilometers burned in Alaska. Last summer, a wildfire broke out in the icy landscape of western Greenland. “We’re already seeing fire in the Arctic, which is unusual today,” Fletcher says. “But it wouldn’t have been unusual in the Pliocene.”

While the work doesn’t predict how much of the Arctic will burn as temperatures rise, the findings do suggest that people need to prepare for more fires in the future.
Trapped ocean heat
Scientists like Fletcher are the genealogists of the Pliocene, collecting records of past environments. Other researchers — the computer modelers — put those old records into broad context, like historians analyzing family trees for patterns of migration and change.

The modelers begin with data on Pliocene temperatures — such as how hot it got on Ellesmere Island or in the North Atlantic Ocean, as revealed by plant fossils or seafloor sediments. Scientists can also estimate how much CO2 was in the atmosphere at the time by looking at clues such as the density of holes in fossilized leaves of Pliocene plants, which used those openings to take up CO2. Estimates vary, but most suggest CO2 levels were about 350 to 450 ppm in the mid-Pliocene.
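The proxy boils down to a calibration problem: measure stomatal density on modern leaves grown at known CO2 levels, fit a relation, then invert it for a fossil leaf. The numbers below are made up to show the shape of the calculation; published calibrations are species-specific and usually nonlinear.

```python
import numpy as np

# Hypothetical calibration: stomatal density (pores per mm^2) on modern
# leaves grown at known CO2 levels. Density tends to fall as CO2 rises.
co2_ppm = np.array([280.0, 320.0, 360.0, 400.0, 450.0])
stomatal_density = np.array([145.0, 130.0, 118.0, 108.0, 97.0])

# Fit a simple linear relation CO2 = a * density + b (a toy stand-in for
# the curvier calibrations used in practice).
a, b = np.polyfit(stomatal_density, co2_ppm, 1)

fossil_density = 104.0  # hypothetical measurement from a Pliocene leaf
print(f"estimated mid-Pliocene CO2: {a * fossil_density + b:.0f} ppm")
```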
It’s not clear what caused the gas buildup during the Pliocene; one possibility is that it came from long-term changes in the way carbon cycles between the land, ocean and atmosphere. But no matter the source, the high levels of CO2 caused temperatures to soar by trapping heat in the atmosphere.
The Pliocene isn’t a perfect crystal ball for today. For starters, scientists know why CO2 levels are now increasing: the burning of fossil fuels and other human activities (SN: 5/30/15, p. 15). As the Industrial Revolution was gaining steam in the 19th century, atmospheric CO2 levels were around 280 ppm. Today the level is just above 400 ppm, and rising.

Modeling the Pliocene climate can help reveal how Earth responded in somewhat similar conditions. That means studying changes in the Pliocene atmosphere, the land surface and most of all the oceans, which absorb the bulk of planetary warming. “That’s the sort of thing you can understand from studying past warm episodes,” Ford says. “What was different about how heat and carbon were moving around in the ocean?”

Ford has begun working with climatologist Natalie Burls of George Mason University in Fairfax, Va., to try to track how the oceans’ major water masses shifted during the Pliocene. Today the North Atlantic has a deep, cold, salty layer that is crucial to the ocean’s “conveyor belt” circulation. In this pattern, warm waters flow northward from the tropics, then cool and become saltier and denser as they reach higher latitudes. That cool water sinks and travels southward, where it warms and rises and begins the cycle all over again.
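The sinking step can be illustrated with a simplified linear equation of state for seawater, in which density rises as water cools and as it grows saltier. The coefficients below are typical textbook values, not numbers from the studies discussed here.

```python
# Simplified linear equation of state for seawater:
#   rho = rho0 * (1 - alpha * (T - T0) + beta * (S - S0))
# Cooler and saltier both mean denser, which is what makes
# high-latitude surface water sink.

RHO0, T0, S0 = 1027.0, 10.0, 35.0  # reference density (kg/m^3), temp (C), salinity (psu)
ALPHA = 1.7e-4                     # thermal expansion coefficient (1/C)
BETA = 7.6e-4                      # haline contraction coefficient (1/psu)

def density(temp_c, salinity_psu):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

tropical = density(25.0, 35.0)  # warm tropical surface water
subpolar = density(5.0, 35.5)   # the same water, cooled and slightly saltier

print(f"tropical: {tropical:.2f} kg/m^3, subpolar: {subpolar:.2f} kg/m^3")
# The subpolar parcel is denser, so it sinks and feeds the deep,
# southward-flowing branch of the conveyor belt.
```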

This conveyor belt circulation is important to today’s Atlantic climate, because it allows the warm Gulf Stream to moderate temperatures from the U.S. East Coast to Western Europe. Burls and colleagues have now found that a similar pattern might have existed in the Pacific during the Pliocene. They call it the Pacific meridional overturning circulation, or PMOC, just as today’s similar Atlantic circulation is known as the AMOC.

Burls’ team discovered this phenomenon by modeling how the Pliocene ocean would have responded to higher temperatures. Because the Arctic was so warm, the temperature difference between the equator and the mid- and high latitudes was not as great as it is today. The weaker temperature gradient would have meant less rainfall and more evaporation in the midlatitude North Pacific. As a result, its uppermost waters would have gotten saltier.
When the North Pacific waters got salty enough, they cooled and sank, setting up an enormous current that dove deep off the coast of northeastern Russia and traveled southward until the water warmed enough to once again rise toward the surface. Real-world data back the claim: Accumulations of calcium carbonate in deep-sea Pacific sediments show that the Pliocene ocean experienced huge shifts at the time, with waters churning all the way from the surface down to about three kilometers deep, as would be expected from a conveyor belt–type circulation. The team reported the finding in Science Advances in September.

What happened in the Pliocene Pacific may say something about the Pacific of the distant future, Burls says. As temperatures rise today, most of the heat is being taken up by the surface layers of the oceans. Over the short term, that works to prevent changes in deep ocean circulation. “Today we’re very quickly turning on the heating, and it will take a while for the deep ocean to adjust,” Burls says.

But in the longer term, thousands of years from now, waters in the North Pacific may eventually become warm and salty enough to establish a PMOC, just as there was in the Pliocene. And that could lead to major changes in weather and climate patterns around the globe.

Land bridges and ice sheets
Other modelers are looking beyond the Pacific to improve their understanding of how different parts of the Pliocene world behaved. About a dozen research groups recently launched a new effort called PlioMIP2, or Pliocene Model Intercomparison Project, Phase 2, to model the climate of a time somewhat similar to today in the mid-Pliocene, about 3.205 million years ago.

“We’re working to produce the best picture that we can of what life seemed to be like at the time,” says Alan Haywood, a climate modeler at the University of Leeds in England and a leader of the effort.

In one discovery, project scientists have found that small changes in the geography of their modeled world make a big improvement in the final results. Early models did not accurately capture how much the polar regions heated up. So PlioMIP2 researchers updated their starting conditions. Instead of assuming that the landmasses of the Pliocene world were identical to today, the group made two small, plausible changes in the Arctic. The researchers made a land bridge between Russia and Alaska by closing the Bering Strait, and they added land to connect a few modern islands in the Canadian Arctic, including Ellesmere.

The change “seems small, but it actually can have a huge impact on climate,” says Otto-Bliesner. For instance, closing the Bering Strait cut off a flow in which relatively fresh water from the Pacific travels over the Arctic and into the North Atlantic. With the updated geography, the PlioMIP2 models suddenly did a much better job of simulating heat in the high Arctic.

Otto-Bliesner will describe the team’s results in New Orleans this month at a meeting of the American Geophysical Union. Another PlioMIP2 group, Deepak Chandan and Richard Peltier of the University of Toronto, reported similar findings in July in Climate of the Past. They too found that closing the Bering Strait allowed their model to better simulate the Arctic heating.

Other Pliocene modelers are trying to figure out how the planet’s enormous ice sheets in Greenland and Antarctica might respond to rising temperatures. Geologic evidence, such as ancient beaches from the Pliocene, suggests that global sea levels then were as much as 25 meters higher than today. If all of Greenland’s ice were to melt, global sea levels would rise about six meters; if all of Antarctica’s ice went, it would contribute about 60 meters. So parts of these ice sheets, but not all, must have melted during the long-ago warm period.
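The back-of-the-envelope budget implied by those figures, using the article’s round numbers (a sketch, not a published ice-sheet reconstruction):

```python
# Sea-level budget sketch, all in meters of sea-level equivalent.
pliocene_high_stand = 25.0  # Pliocene sea level above today's (upper estimate)
greenland_total = 6.0       # full melt of Greenland's ice sheet
antarctica_total = 60.0     # full melt of Antarctica's ice sheet

# Even if Greenland melted completely, Antarctica must supply the rest.
antarctic_share = pliocene_high_stand - greenland_total
print(f"Antarctica needed: {antarctic_share:.0f} m, "
      f"about {antarctic_share / antarctica_total:.0%} of its ice")
# -> 19 m, roughly a third of Antarctica's ice and far short of a full
#    melt, so parts of both ice sheets, but not all, must have gone.
```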

Several of the PlioMIP2 research groups are modeling how the polar ice sheets responded in the heat of the Pliocene. “It will tell us how much we should be worried,” Otto-Bliesner says.
One new study suggests that the northern and southern ice sheets may have behaved out of phase with each other. In a simulation of the mid- to late Pliocene, climate modeler Bas de Boer of Utrecht University in the Netherlands and colleagues found that as Greenland’s ice melted, Antarctica’s ice could have been relatively stable, and vice versa.

“At different points, they could be contributing to the sea level story or against it,” says Haywood. He, along with colleagues, reported the results in the Oct. 30 Geophysical Research Letters.

That out-of-sync melting suggests the Pliocene was a complicated time. Just because global temperatures were high doesn’t mean that all of Earth’s ice sheets melted equally. (Today, both Greenland and West Antarctica are losing ice to the oceans as global temperatures rise.)

The Pliocene wound to an end around 2.6 million years ago, as CO2 levels dropped. Chemical reactions with eroding rocks may have sucked much of the CO2 out of the atmosphere and tucked it away in the oceans, removing the greenhouse gas. The planet entered a long-term cooling trend. Since the end of the Pliocene, Earth has been in and out of a series of ice ages.

But now, greenhouse gases are once again flooding into the atmosphere. Global temperatures are ticking up inexorably year after year. That makes the lessons of the past all the more relevant for the future.

Some high-temperature superconductors might not be so odd after all

A misfit gang of superconducting materials may be losing its outsider status.

Certain copper-based compounds superconduct, or transmit electricity without resistance, at unusually high temperatures. It was thought that the standard theory of superconductivity, known as Bardeen-Cooper-Schrieffer theory, couldn’t explain these oddballs. But new evidence suggests that the standard theory applies despite the materials’ quirks, researchers report in the Dec. 8 Physical Review Letters.

All known superconductors must be chilled to work. Most must be cooled to temperatures that hover above absolute zero (–273.15° Celsius). But some copper-based superconductors work at temperatures above the boiling point of liquid nitrogen (around –196° C). Finding a superconductor that functions at even higher temperatures — above room temperature — could provide massive energy savings and new technologies (SN: 12/26/15, p. 25). So scientists are intent upon understanding the physics behind known high-temperature superconductors.
When placed in a magnetic field, many superconductors display swirling vortices of electric current — a hallmark of the standard superconductivity theory. But for the copper-based superconductors, known as cuprates, scientists couldn’t find whirls that matched the theory’s predictions, suggesting that a different theory was needed to explain how the materials superconduct. “This was one of the remaining mysteries,” says physicist Christoph Renner of the University of Geneva.

Now, Renner and colleagues have found vortices that agree with the theory in a high-temperature copper-based superconductor, a compound of yttrium, barium, copper and oxygen.

Vortices in superconductors can be probed with a scanning tunneling microscope. As the microscope tip moves over a vortex, the instrument records a change in the electrical current. Renner and colleagues realized that, in their copper compound, there were two contributions to the current that the probe was measuring, one from superconducting electrons and one from nonsuperconducting ones. The nonsuperconducting contribution was present across the entire surface of the material and masked the signature of the vortices.

Subtracting the nonsuperconducting portion revealed the vortices, which behaved in agreement with the standard superconductivity theory. “That, I think, is quite astonishing; it’s quite a feat,” says Mikael Fogelström of Chalmers University of Technology in Gothenburg, Sweden, who was not involved with the research.
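Conceptually, the correction is a point-by-point background subtraction: if the nonsuperconducting contribution is roughly uniform across the surface, estimating it and removing it leaves the vortex signature behind. A toy sketch on synthetic data follows; it illustrates the idea, not the group’s actual analysis pipeline.

```python
import numpy as np

# Synthetic stand-in for a scanning-tunneling-microscope conductance map:
# a uniform nonsuperconducting background plus faint dips at vortex cores.
rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
signal = np.full((n, n), 1.0)  # uniform nonsuperconducting background
for cx, cy in [(16, 16), (48, 16), (16, 48), (48, 48)]:  # vortex positions
    signal -= 0.05 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20.0)
signal += rng.normal(0.0, 0.005, (n, n))  # measurement noise

# Estimate the background (most pixels lie far from any vortex, so the
# median works here) and subtract it to expose the vortices.
background = np.median(signal)
residual = signal - background

print(f"background ~ {background:.3f}, deepest vortex dip {residual.min():.3f}")
```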
The result lifts some of the fog surrounding cuprates, which have so far resisted theoretical explanation. But plenty of questions still surround the materials, Fogelström says. “It leaves many things still open, but it sort of gives a new picture.”

Not all strep infections are alike, and it may have nothing to do with you

One person infected with strep bacteria might get a painful sore throat; another might face a life-threatening blood infection. Now, scientists are trying to pin down why.

Variation between individuals’ immune systems may not be entirely to blame. Instead, extra genes picked up by some pathogens can cause different strains to have wildly different effects on the immune system, even in the same person, researchers report January 11 in PLOS Pathogens.

The idea that different strains of bacteria can behave differently in the body isn’t new. Take E. coli: Some strains of the bacteria that can cause foodborne illness make people far sicker than other strains. But bacteria have exceptionally large amounts of genetic variation, even between members of the same species. Scientists are still trying to figure out how that genetic diversity affects the way microbes interact with the immune system.
Any species of bacteria has a core set of genes that all its members share. Then there’s a whole pot of genes that different strains of the species pick and choose to create what’s known as an accessory genome. These genes are custom add-ons that specific strains have acquired over time, from their environment or from other microbes — something like an expansion pack for a card game. Sometimes, that extra genetic material gives bacteria new traits.
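In computational terms, the core/accessory split is plain set arithmetic over each strain’s gene list: the core genome is the intersection across strains, and a strain’s accessory genes are whatever it carries beyond that core. A minimal sketch (the strains and gene sets are invented for illustration):

```python
# Toy pangenome: every strain shares the core genes; each may also carry
# its own accessory add-ons. Gene sets here are invented for illustration.
strains = {
    "strain_A": {"gyrA", "rpoB", "recA", "toxin_x"},
    "strain_B": {"gyrA", "rpoB", "recA", "capsule_y", "adhesin_z"},
    "strain_C": {"gyrA", "rpoB", "recA"},
}

core = set.intersection(*strains.values())  # genes shared by every strain
for name, genes in strains.items():
    accessory = genes - core                # this strain's add-ons
    print(f"{name}: accessory genes = {sorted(accessory) or 'none'}")
# core == {"gyrA", "rpoB", "recA"}; strain_C carries no extras.
```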

Uri Sela and his colleagues at the Rockefeller University in New York City tested how these extra genes influence the way two common species of bacteria, Staphylococcus aureus and Streptococcus pyogenes, interact with the immune system. Staphylococcus bacteria can cause everything from rashes to food poisoning to blood infections. Streptococcus bacteria can cause strep throat, as well as a host of more serious illnesses (SN: 10/4/14, p. 22).

Different strains of the same species provoked wildly different immune responses in blood samples collected from the same patient, the researchers first showed. But the strain-specific responses were consistent across patients. Some strains triggered lots of T cells to be made in every sample, for example; others increased B cell activity. (T cells and B cells are the two main weapons of the adaptive immune response, which enables the body to build long-lasting immunity against a particular pathogen.) In tests of strains missing some of their extra genes, though, the T cells didn’t respond as strongly as they did to a matching strain that contained the extra genes. This finding suggests that the variation in immune response across strains was coming, at least in part, from differences in these supplementary genes.
“Currently when a patient comes to the hospital with an infection, we don’t define the strain of the species” for common infections like strep and staph, says Sela, an immunologist. In the future, he says, information about the strain could help doctors predict how a patient’s illness will unfold and decide on the best treatment.

The new study “adds fuel to an active debate” about the role of accessory genes, says Alan McNally, a microbiologist at the University of Birmingham in England — whether or not the collections of genetic add-ons that bacteria maintain are shaped by natural selection, the process that fuels evolution. This research suggests that for some kinds of bacteria, genetic customization might aid survival of certain strains by enabling them to provoke a tailored immune response.

But more research needs to be done to link the strain-to-strain variation in immune response to the accessory genome, he says, as this study looked at only a few extra genes, not the entire accessory genome.

Ancient kids’ toys have been hiding in the archaeological record

Youngsters have probably been playing their way into cultural competence for at least tens of thousands of years. So why are signs of children largely absent from the archaeological record?

A cartoon that Biblical scholar Kristine Garroway taped up in her college dorm helps to explain kids’ invisibility at ancient sites: Two men in business suits stare intently at an unidentifiable round object sitting on a table. “Hey, what’s this?” asks the first guy. “I dunno, probably a toy … or a religious object,” says the second.
Archaeologists have long tended to choose the second option, says Garroway, now a visiting scientist at Hebrew Union College–Jewish Institute of Religion in Los Angeles. Ambiguous finds, such as miniature pottery vessels and small figurines, get classified as ritual or decorative objects. Some of these artifacts undoubtedly were used in ceremonies. But not all of them, Garroway argues.
Of 48 miniature clay vessels excavated from inside roughly 3,650- to 4,000-year-old houses at Israel’s Tel Nagila site, 10 retained fingerprints the size of children’s, made during the shaping of the soft clay before it was heated and hardened, archaeologists reported in 2013. Kids must have made those somewhat unevenly shaped jars and bowls, each easily held within a child’s hand, concluded Joe Uziel of the Israel Antiquities Authority in Jerusalem and independent Israeli researcher Rona Avissar Lewis in Palestine Exploration Quarterly.
Unusual finds in Israel dating to around 3,000 years ago also represent children’s early attempts to mimic adult craftwork, Garroway said in a November 18 presentation in Boston at the annual meeting of the American Schools of Oriental Research. Numerous rounded clay disks, each pierced with two holes, have mystified investigators for nearly a century. As early as 1928, an archaeologist suggested that these button-sized objects were toys.
After passing a string through both of a disk’s holes and tying the ends together, a youngster could swing the string to wind up the toy and then pull both ends of the string to make the disk spin. Clay disks from six Israeli sites can be separated into those made by skilled artisans and others — featuring rough edges and unevenly spaced holes — made by novices, including children, Garroway proposes. If those items were toys, sloppy execution may have partly resulted from children’s impatience to play with the final product, she suspects.

Garroway’s proposal appears likely, especially in light of evidence that more than 10,000 years earlier, people in France and Spain made similar spinning disks decorated with animals that appeared to move as the toy twirled (SN: 6/30/12, p. 12), says archaeologist Michelle Langley of Griffith University in Brisbane, Australia.

Western European finds from as early as 14,000 to 21,000 years ago also may have gone unrecognized as children’s toys, Langley suggests in a paper published this month in the Oxford Journal of Archaeology. One specimen, a cave lion carved out of a reindeer’s antler, displays so much polish from handling that children may have played with the item for years, she says. Some bone spearpoints with broken tips bear signs of unskilled repair, suggesting adults gave the damaged weapons to children to practice bone-working skills and perhaps play with, she adds.

Ancient ozone holes may have sterilized forests 252 million years ago

Volcano-fueled holes in Earth’s ozone layer 252 million years ago may have repeatedly sterilized large swaths of forest, setting the stage for the world’s largest mass extinction event. Such holes would have allowed ultraviolet-B radiation to blast the planet. Even radiation levels below those predicted for the end of the Permian period can damage trees’ ability to make seeds, researchers report February 7 in Science Advances.

Jeffrey Benca, a paleobotanist at the University of California, Berkeley, and his colleagues exposed plantings of a modern dwarf pine (Pinus mugo) to varying levels of UV-B radiation, from none up to 93 kilojoules per square meter per day. According to previous simulations, UV-B radiation at the end of the Permian may have increased from a background level of 10 kilojoules (just above current ambient levels) to as much as 100 kilojoules, due to large concentrations of ozone-damaging halogens spewed from volcanoes (SN: 1/15/11, p. 12).

Exposure to higher UV-B levels led to more malformed pollen, the researchers found, with up to 13 percent of pollen grains deformed at the highest dose. And although the trees survived the heightened irradiation, their ovulate cones — cones that, when fertilized by pollen, become seeds — did not. But the trees weren’t permanently sterilized: Once removed from extra UV-B exposure, the trees could reproduce again.

The finding supports previous research suggesting that colossal volcanic eruptions in what’s now Siberia, about 300,000 years before the onset of the extinction event, probably triggered the die-off of nearly all marine species and two-thirds of species living on land (SN: 9/19/15, p. 10). Repeated pulses of volcanism at the end of the Permian may have led to several periods of irradiation that sterilized the forests, causing a catastrophic breakdown of food webs, the researchers say — an indirect but effective way to kill.

This stick-on patch could keep tabs on stroke patients at home

AUSTIN, Texas — Stretchy sensors that stick to the throat could track the long-term recovery of stroke survivors.

These new Band-Aid–shaped devices contain motion sensors that detect muscle movement and vocal cord vibrations. That sensor data could help doctors diagnose and monitor the effectiveness of certain treatments for post-stroke conditions like difficulty swallowing or talking, researchers reported February 17 in a news conference at the annual meeting of the American Association for the Advancement of Science. Up to 65 percent of stroke survivors have trouble swallowing, and about a third of survivors have trouble carrying on conversations.
The devices can monitor speech patterns more reliably than microphones by sensing tissue movement rather than recording sound. “You don’t pick up anything in terms of ambient noise,” says study coauthor John Rogers, a materials scientist and bioengineer at Northwestern University in Evanston, Ill. “You can be next to an airplane jet engine. You’re not going to see that in the [sensor] signal.”

Developed by Rogers’ team, the sensors have built-in 12-hour rechargeable batteries and continually stream motion data to a smartphone. Researchers are now testing the sensors with real stroke patients to see how the devices can be made more user-friendly. For instance, Rogers’ team realized that patients were unlikely to wear sensors that were too easily visible. Equipped with more sensitive motion sensors, the patches can be worn lower on a person’s neck, hidden behind a buttoned-up shirt, and still pick up throat motion.

These kinds of sensors could also track the recovery of neck cancer patients, who commonly develop swallowing and speaking problems caused by radiation therapy and surgery, Rogers says. The devices can also measure breathing and heart rates to monitor sleep quality and help diagnose sleep apnea. Rogers expects this wearable tech to be ready for widespread use within the next year or two.

Some flu strains can make mice forgetful

With fevers, chills and aches, the flu can pound the body. Some influenza viruses may hammer the brain, too. Months after being infected with influenza, mice had signs of brain damage and memory trouble, researchers report online February 26 in the Journal of Neuroscience.

It’s unclear if people’s memories are affected in the same way as those of mice. But the new research adds to evidence suggesting that some body-wracking infections could also harm the human brain, says epidemiologist and neurologist Mitchell Elkind of Columbia University, who was not involved in the study.
As anyone who has been waylaid by the flu knows, brainpower can suffer at the infection’s peak. But not much is known about any potential lingering effects on thinking or memory. “It hasn’t occurred to people that it might be something to test,” says neurobiologist Martin Korte of Technische Universität Braunschweig in Germany.

The new study examined the effects of three types of influenza A — H1N1, the strain behind 2009’s swine flu outbreak; H7N7, a dangerous strain that only rarely infects people; and H3N2, the strain behind much of the 2017–2018 flu season misery (SN: 2/17/18, p. 12). Korte and colleagues shot these viruses into mice’s noses, and then looked for memory problems 30, 60 and 120 days later.

A month after infection, the mice all appeared to have recovered and gained back weight. But those that had received H3N2 and H7N7 had trouble remembering the location of a hidden platform in a pool of water, the researchers found. Mice that received no influenza or the milder H1N1 virus performed normally at the task.
Researchers also studied the brain tissue of the infected mice under a microscope and found that the memory problems tracked with changes in nerve cells. A month after H7N7 or H3N2 infection, mice had fewer nerve cell connectors called dendritic spines on cells in the hippocampus, a brain region involved in memory. Electrical experiments on the nerve cell samples in dishes also suggested the cells’ signal-sending abilities were impaired.
What’s more, these mice’s brains looked inflamed under the microscope, full of immune cells called microglia that were still revved up 30 and 60 days after infection. Cell counts revealed that mice that had suffered through H3N2 or H7N7 had more active microglia than mice infected with H1N1 or no virus at all. That lingering activity was surprising, Korte says; most immune cells in the body usually settle down soon after an infection clears.

These memory problems and signs of brain trouble were gone by 120 days, which translates to about a decade in human time, Korte says. “I’m not saying that everyone who has influenza is cognitively impaired for 10 years,” he says, noting that human brains are much more complex than those of mice. “The news is more that we should not only look at lung functionality after the flu, but also cognitive effects, weeks and months after infection.”

H7N7 can infect brain cells directly. But H1N1 and H3N2 don’t typically get into the brain (and Korte and colleagues confirmed that in their experiments). Some flu viruses may be causing brain trouble remotely, perhaps through inflammatory signals in the blood making their way into the brain, the study suggests. If that pathway is confirmed, then many types of infections could cause similar effects on the brain. “It is plausible that this is a general phenomenon,” Elkind says.