Saber-toothed kittens were the spitting image of their parents. Even as babies, the cats not only had the oversized canine teeth but also unusually powerful forelimbs, Katherine Long, a graduate student at California State Polytechnic University in Pomona, and colleagues report September 27 in PLOS ONE.
As adults, the ferocious felines used those strong forelimbs to secure wriggling prey before slashing a throat or belly (thereby avoiding breaking off a tooth in the struggle). Paleontologists have puzzled over whether saber-toothed cats such as Smilodon fatalis developed those robust limbs as they grew.
To compare the growth rate of Smilodon with that of similar-sized non‒saber-toothed cats that lived alongside it, Long and her team turned to fossils collected from the La Brea Tar Pits in Los Angeles. The ancient asphalt traps hold a wealth of species and specimens from juveniles to adults, dating to between 37,000 and 9,000 years ago.
The Smilodon bones, they found, did not show any evidence of an unusual growth spurt. Instead, the bones grew longer and slimmer as the kittens grew up, following the same developmental pattern as the other large cats. That suggests that when it comes to their mighty forelimbs, Smilodon kittens were just born that way.
In a pitch-black rainforest with fluttering moths and crawling centipedes, Christina Warinner dug up her first skeleton. Well, technically it was a full skeleton plus two headless ones, all seated and draped in ornate jewelry. To deter looters, she excavated through the night while one teammate held up a light and another killed as many bugs as possible.
As Warinner worked, unanswerable questions about the people whose skeletons she was excavating flew through her mind. “There’s only so much you can learn by looking with your own eyes at a skeleton,” she says. “I became increasingly interested in all the things that I could not see — all the stories that these skeletons had to tell that weren’t immediately accessible, but could be accessible through science.”
At age 21, Warinner cut her teeth on that incredibly complex sacrificial burial left behind by the Maya in a Belize rainforest. Today, at age 37, the molecular anthropologist scrapes at not-so-pearly whites to investigate similar questions, splitting her time between the University of Oklahoma in Norman and the Max Planck Institute for the Science of Human History in Jena, Germany. In 2014, she and colleagues reported a finding that generated enough buzz to renew interest in an archaeological resource many had written off decades ago: fossilized dental plaque, or calculus. Ancient DNA and proteins in the plaque belong to microbes that could spill the secrets of the humans they once inhabited — what the people ate, what ailed them, perhaps even what they did for a living.
Bacteria form plaque that mineralizes into calculus throughout a person’s life. “It’s the only part of your body that fossilizes while you’re still alive,” notes Warinner. “It’s also the last thing to decay.”
Though plaque is prolific in the archaeological record, most researchers viewed calculus as “the crap you scraped off your tooth in order to study it,” says Amanda Henry, an archaeologist at Leiden University in the Netherlands. With some exceptions, molecular biologists saw calculus as a shoddy source of ancient DNA.
But a few researchers, including Henry, had been looking at calculus for remnants of foods as potential clues to ancient diets. Inspired by some of Henry’s images of starch grains preserved in calculus, Warinner wondered if the plaque might yield dead bacterial structures, perhaps even bacteria’s genetic blueprints.
Her timing couldn’t have been better. Warinner began her graduate studies at Harvard in 2004, just after the sequencing of the human genome was completed, and by the time she left in 2010, efforts to survey the human microbiome were in full swing. As a postdoc at the University of Zurich, Warinner decided to attempt to extract DNA from the underappreciated dental grime preserved on the teeth of four medieval skeletons from Germany.

At first, the results were dismal. But she kept at it. “Tina has a very interested, curious and driven personality,” Henry notes. Warinner turned to a new instrument that could measure DNA concentrations in skimpy samples, a Qubit fluorometer. A surprising error message appeared: DNA too high. Dental calculus, it turned out, was chock-full of genetic material. “While people were struggling to pull out human DNA from the skeleton itself, there’s 100 to 1,000 times more DNA in the calculus,” Warinner says. “It was sitting there in almost every skeletal collection untouched, unanalyzed.”

To help her interpret the data, Warinner mustered an army of collaborators from fields ranging from immunology to metagenomics. She and her colleagues found a slew of proteins and DNA snippets from bacteria, viruses and fungi, including dozens of oral pathogens, as well as the full genetic blueprint of an ancient strain of Tannerella forsythia, which still infects people’s gums today. In 2014, in Nature Genetics, Warinner’s team revealed a detailed map of that miniature microbial world on the decaying teeth of the German skeletons.
Later in 2014, her group found the first direct protein-based evidence of milk consumption in the plaque of Bronze Age skeletons from 3000 B.C. That same study linked milk proteins preserved in the calculus of other ancient human skeletons to specific animals — providing a peek into long-ago lifestyles.
“The fact that you can tell the difference between, say, goat milk and cow milk, that’s kind of mind-blowing,” says Laura Weyrich, a microbiologist at the University of Adelaide in Australia, who also studies calculus. Since then, Warinner has found all sorts of odds and ends lurking on archaic chompers, from poppy seeds to paint pigments.

Warinner’s team is still looking at the origins of dairying and its microbial players, but she’s also branching out to the other end of the digestive spectrum. The researchers are looking at ancient DNA in paleofeces, which is exactly what it sounds like — desiccated or semifossilized poop. It doesn’t stay as fresh as plaque in the archaeological record, but she’s managed to find some sites with well-preserved samples. By examining the array of microbes that lived in the excrement and plaque of past humans and their relatives, Warinner hopes to characterize how our microbial communities have changed through time — and how they’ve changed us.
The research has implications for understanding chronic, complex human diseases over time. Warinner’s ancient DNA work “opens up a window on past health,” says Clark Larsen, an anthropologist at Ohio State University.
It’s all part of what Warinner calls “the archaeology of the unseen.”
Editor’s note: This story was corrected on October 4, 2017, to note that the 2014 report on milk consumption was based on protein evidence, not DNA.
A founding father of behavioral economics — a research school that has popularized the practice of “nudging” people into making decisions that authorities deem to be in their best interests — has won the 2017 Nobel Memorial Prize in Economic Sciences.
Richard Thaler, of the University of Chicago Booth School of Business, received the award October 9 for leading a discipline that has challenged the long-standing economic assumption that humans are purely rational and selfish. Instead, he argues, we are driven by simple, often emotionally fueled assumptions that can lead us astray.
“Richard Thaler has pioneered the analysis of ways in which human decisions systematically deviate from traditional economic models,” says cognitive scientist Peter Gärdenfors of Lund University, Sweden, a member of the Economic Sciences Prize Committee.
Thaler argues that, even if people try to make good economic choices, our thinking abilities are limited. In dealing with personal finances, for instance, he finds that most people mentally earmark money into different accounts, say for housing, food, vacations and entertainment. That can lead to questionable decisions, such as saving for a vacation in a low-interest savings account while buying household goods with a high-interest credit card.
At an October 9 news conference at the University of Chicago, Thaler referenced mental accounting in describing what he would do with the roughly $1.1 million award. “Every time I spend any money on something fun, I’ll say it came from the Nobel Prize.”
Thaler’s research has also focused on how judgments about fairness, such as sudden jumps in the prices of consumer items, affect people’s willingness to buy those items. A third area of his research finds that people’s short-term desires often override long-term plans. A classic example consists of putting off saving for retirement until later in life.
That research in particular inspired his 2008 book Nudge: Improving Decisions about Health, Wealth and Happiness, coauthored by Cass Sunstein, now at Harvard Law School. Nudging, also known as libertarian paternalism, is a way for public and private institutions to prod people to make certain decisions (SN: 3/18/17, p. 18). For instance, employees more often start saving for retirement early in their careers when offered savings plans that they must opt out of. Many governments, including those of the United Kingdom and the United States, have funded teams of behavioral economists, called nudge units, to develop ways to nudge people to, say, apply for government benefits or comply with tax laws. A total of 75 nudge units now exist worldwide, Thaler said at the news conference.
Nudging has its roots in a line of research, dubbed heuristics and biases, launched in the 1970s by two psychologists — 2002 economics Nobel laureate Daniel Kahneman of Princeton University and the late Amos Tversky of Stanford University. Investigators in heuristics and biases contend that people can’t help but make many types of systematic thinking errors, such as being overconfident in their decisions.
Thaler, like Kahneman, views the mind as consisting of one system for making rapid, intuitive decisions that are often misleading and a second system for deliberating slowly and considering as much relevant information as possible.
Despite the influence of Thaler’s ideas on research and social policy, they are controversial among decision researchers (SN: 6/4/11, p. 26). Some argue that nudging overlooks the power of simple rules-of-thumb for making decisions that people can learn to wield on their own.
“I don’t think I’ve changed everybody’s minds,” Thaler said. “But many young economists embrace behavioral economics.”
Imagine a world where the polar ice sheets are melting, sea level is rising and the atmosphere is stuffed with about 400 parts per million of carbon dioxide. Sound familiar? It should. We’re living it. But the description also matches Earth a little over 3 million years ago, in the middle of the geologic epoch known as the Pliocene.
To understand how our planet might respond as global temperatures rise, scientists are looking to warm periods of the past. These include the steamy worlds of the Cretaceous Period, around 90 million years ago, and the boundary of the Paleocene and Eocene epochs, about 56 million years ago. But to many researchers, the best reference for today’s warming is the more recent Pliocene, which lasted from 5.3 million to 2.6 million years ago. The mid-Pliocene was the last time atmospheric CO2 levels were similar to today’s, trapping heat and raising global temperatures above the levels Earth is experiencing now.
New research is illuminating how the planet responded to Pliocene warmth. One set of scientists has fanned out across the Arctic, gathering geologic clues to how temperatures there may have been as much as 19 degrees Celsius higher than today. The warmth allowed trees to spread far to the north, creating Arctic forests where three-toed horses, giant camels and other animals roamed. When lightning struck, wildfires roared across the landscape, spewing soot into the air and altering the region’s climate.

Other researchers are pushing the frontiers of climate modeling, simulating how the oceans, atmosphere and land responded as Pliocene temperatures soared. One new study shows how the warmth may have triggered huge changes in ocean circulation, setting up an enormous overturning current in the Pacific Ocean, similar to the “conveyor belt” in today’s Atlantic that drives weather and climate. A second new paper suggests that the Greenland and Antarctic ice sheets might have responded differently to Pliocene heat, melting at different times.
All this research into the last great warm period is helping scientists think more deeply about how the future might play out. It may not be a road map to the next 100 years, but the Pliocene is a rough guide to the high sea levels, vanishing ice and altered weather patterns that might arrive hundreds to thousands of years from now.
“It’s a case study for understanding how warm climates function,” says Heather Ford, a paleoceanographer at the University of Cambridge. “It’s our closest analog for future climate change.”
Walk through history

Teasing out the history of the Pliocene is a little like digging through a family’s past. One group of enthusiasts goes through genealogical records, collecting data on who lived where, and when. Another group uses computer software and modeling to look for broad patterns that describe how the family grew and moved over time.
The data detectives begin their work in rocks and sediments dating to the Pliocene that are scattered around the world like family-tree histories in city library archives. In 1988, the U.S. Geological Survey began a project called PRISM, for Pliocene Research, Interpretation and Synoptic Mapping, which aims to gather as many geologic clues as possible about Pliocene environments. At its start, PRISM focused on a collection of deep-sea cores drilled from the floor of the North Atlantic Ocean.

Different types of marine organisms thrive in water of different temperatures. By comparing the relative abundance of species of tiny organisms preserved in the deep-sea cores, PRISM scientists could roughly map how cold-loving organisms gave way to warm ones (and vice versa) at different times in the past. Early results from the project, reported in 1992 by USGS research geologist Harry Dowsett and colleagues, showed that during the Pliocene, warming was amplified at higher latitudes in the North Atlantic.
Scientists continue to add to the PRISM records. One international team drilled a sediment core from beneath a Siberian lake and found that summer air temperatures there, in the mid-Pliocene, were as high as 15° C (about 59° Fahrenheit). That’s 8 degrees warmer than today (SN: 6/15/13, p. 13). Other researchers uncovered clues, such as plant fossils from peat bogs, that suggest mean annual temperatures on Canada’s now-frozen Ellesmere Island near Greenland were as much as 18 degrees higher than today (SN: 4/6/13, p. 9).
Now, a new group of biologists, geoscientists and other experts in past landscapes has banded together in a project called PoLAR-FIT, for Pliocene Landscape and Arctic Remains — Frozen in Time. The team is focusing on the Arctic because, just as today’s Arctic is warming faster than other parts of the planet, the Pliocene Arctic warmed more than the rest of the globe. “That’s what we call polar amplification,” says Tamara Fletcher, a team member and paleoecologist at the University of Montana in Missoula. “It was even more magnified in the Pliocene than what we’re seeing today.”
PoLAR-FIT scientists travel to the Arctic to collect geologic evidence about how the region responded to rising temperatures in the Pliocene. In the thawing permafrost slopes of Ellesmere Island, for instance, Fletcher and colleagues have been mapping black layers of charcoal in sediments dating from the Pliocene. Each charcoal layer represents a fire that burned through the ancient forest. By tracking the events across Ellesmere and other nearby islands, Fletcher’s team discovered that fire was widespread across what is now the Canadian Arctic.

Wildfires changed vegetation across the landscape, possibly altering how the Arctic responded to rising temperatures. Soot rising from the fires would have darkened the skies, potentially leading to local or regional weather changes. “How important is that to the warming?” asks Bette Otto-Bliesner, a paleoclimatologist at the National Center for Atmospheric Research in Boulder, Colo. “That’s something we’re still trying to determine.” Fletcher, Otto-Bliesner and colleagues described the charcoal discovery, along with modeling studies of the fires’ effects, in Seattle in October at a meeting of the Geological Society of America.
In 2012, about 283,280 square kilometers of forest burned in Russia. Three years later, more than 20,230 square kilometers burned in Alaska. Last summer, a wildfire broke out in the icy landscape of western Greenland. “We’re already seeing fire in the Arctic, which is unusual today,” Fletcher says. “But it wouldn’t have been unusual in the Pliocene.”
While the work doesn’t predict how much of the Arctic will burn as temperatures rise, the findings do suggest that people need to prepare for more fires in the future.

Trapped ocean heat

Scientists like Fletcher are the genealogists of the Pliocene, collecting records of past environments. Other researchers — the computer modelers — put those old records into broad context, like historians analyzing family trees for patterns of migration and change.
The modelers begin with data on Pliocene temperatures — such as how hot it got on Ellesmere Island or in the North Atlantic Ocean, as revealed by plant fossils or seafloor sediments. Scientists can also estimate how much CO2 was in the atmosphere at the time by looking at clues such as the density of holes in fossilized leaves of Pliocene plants, which used those openings to take up CO2. Estimates vary, but most suggest CO2 levels were about 350 to 450 ppm in the mid-Pliocene. It’s not clear what caused the gas buildup during the Pliocene; one possibility is that it came from long-term changes in the way carbon cycles between the land, ocean and atmosphere. But no matter the source, the high levels of CO2 caused temperatures to soar by trapping heat in the atmosphere.

The Pliocene isn’t a perfect crystal ball for today. For starters, scientists know why CO2 levels are now increasing — burning of fossil fuels and other human activities (SN: 5/30/15, p. 15). As the Industrial Revolution was gaining steam in the 19th century, atmospheric CO2 levels were around 280 ppm. Today that level is just above 400 ppm, and rising.
Modeling the Pliocene climate can help reveal how Earth responded in somewhat similar conditions. That means studying changes in the Pliocene atmosphere, the land surface and most of all the oceans, which absorb the bulk of planetary warming. “That’s the sort of thing you can understand from studying past warm episodes,” Ford says. “What was different about how heat and carbon were moving around in the ocean?”
Ford has begun working with climatologist Natalie Burls of George Mason University in Fairfax, Va., to try to track how the oceans’ major water masses shifted during the Pliocene. Today the North Atlantic has a deep, cold, salty layer that is crucial to the ocean’s “conveyor belt” circulation. In this pattern, warm waters flow northward from the tropics, then cool and become saltier and denser as they reach higher latitudes. That cool water sinks and travels southward, where it warms and rises and begins the cycle all over again.
This conveyor belt circulation is important to today’s Atlantic climate, because it allows the warm Gulf Stream to moderate temperatures from the U.S. East Coast to Western Europe. Burls and colleagues have now found that a similar pattern might have existed in the Pacific during the Pliocene. They call it the Pacific meridional overturning circulation, or PMOC, just as today’s similar Atlantic circulation is known as the AMOC.
Burls’ team discovered this phenomenon by modeling how the Pliocene ocean would have responded to higher temperatures. Because the Arctic was so warm, the temperature difference between the equator and the mid- and high latitudes was not as great as it is today. The weaker temperature gradient would have meant less rainfall and more evaporation in the midlatitude North Pacific. As a result, its uppermost waters would have gotten saltier. When the North Pacific waters got salty enough, they cooled and sank, setting up an enormous current that dove deep off the coast of northeastern Russia and traveled southward until the water warmed enough to once again rise toward the surface.

Real-world data back the claim: Accumulations of calcium carbonate in deep-sea Pacific sediments show that the Pliocene ocean experienced huge shifts at the time, with waters churning all the way from the surface down to about three kilometers deep, as would be expected from a conveyor belt–type circulation. The team reported the finding in Science Advances in September.
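The tipping point in that story is density. A minimal sketch, using a standard linearized equation of state for seawater with generic textbook coefficients (not values from Burls’ model), shows how a modest salinity boost can make a cooled surface parcel denser than the water beneath it:

```python
# Sketch of the density argument behind a Pacific overturning cell, using
# a linearized equation of state for seawater. Coefficients and parcel
# values are generic textbook numbers, not output from Burls' model.

RHO0 = 1027.0          # kg/m^3, reference density
T0, S0 = 10.0, 35.0    # reference temperature (deg C) and salinity (psu)
ALPHA = 2e-4           # thermal expansion coefficient, per deg C
BETA = 8e-4            # haline contraction coefficient, per psu

def density(temp_c, salinity_psu):
    """rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))"""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

deep = density(4.0, 34.6)           # underlying deep water
fresh_parcel = density(4.0, 34.2)   # today-like surface parcel after cooling
salty_parcel = density(4.0, 35.0)   # Pliocene-like parcel: extra evaporation

print(f"deep water:          {deep:.2f} kg/m^3")
print(f"cooled fresh parcel: {fresh_parcel:.2f} kg/m^3 (stays afloat)")
print(f"cooled salty parcel: {salty_parcel:.2f} kg/m^3 (dense enough to sink)")
# Only the saltier parcel ends up denser than the water below it; that
# sinking is what sets up a conveyor belt-style overturning circulation.
```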
What happened in the Pliocene Pacific may say something about the Pacific of the distant future, Burls says. As temperatures rise today, most of the heat is being taken up by the surface layers of the oceans. Over the short term, that works to prevent changes in deep ocean circulation. “Today we’re very quickly turning on the heating, and it will take a while for the deep ocean to adjust,” Burls says.
But in the longer term, thousands of years from now, waters in the North Pacific may eventually become warm and salty enough to establish a PMOC, just as there was in the Pliocene. And that could lead to major changes in weather and climate patterns around the globe.
Land bridges and ice sheets

Other modelers are looking beyond the Pacific to improve their understanding of how different parts of the Pliocene world behaved. About a dozen research groups recently launched a new effort called PlioMIP2, or Pliocene Model Intercomparison Project, Phase 2, to model the climate of a time somewhat similar to today in the mid-Pliocene, about 3.205 million years ago.
“We’re working to produce the best picture that we can of what life seemed to be like at the time,” says Alan Haywood, a climate modeler at the University of Leeds in England and a leader of the effort.
In one discovery, project scientists have found that small changes in the geography of their modeled world make a big improvement in the final results. Early models did not accurately capture how much the polar regions heated up. So PlioMIP2 researchers updated their starting conditions. Instead of assuming that the landmasses of the Pliocene world were identical to today, the group made two small, plausible changes in the Arctic. The researchers made a land bridge between Russia and Alaska by closing the Bering Strait, and they added land to connect a few modern islands in the Canadian Arctic, including Ellesmere.
The change “seems small, but it actually can have a huge impact on climate,” says Otto-Bliesner. For instance, closing the Bering Strait cut off a flow in which relatively fresh water from the Pacific travels over the Arctic and into the North Atlantic. With the updated geography, the PlioMIP2 models suddenly did a much better job of simulating heat in the high Arctic.
Otto-Bliesner will describe the team’s results in New Orleans this month at a meeting of the American Geophysical Union. Another PlioMIP2 group, Deepak Chandan and Richard Peltier of the University of Toronto, reported similar findings in July in Climate of the Past. They too found that closing the Bering Strait allowed their model to better simulate the Arctic heating.
Other Pliocene modelers are trying to figure out how the planet’s enormous ice sheets in Greenland and Antarctica might respond to rising temperatures. Geologic evidence, such as ancient beaches from the Pliocene, suggests that global sea levels then were as much as 25 meters higher than today. If all of Greenland’s ice were to melt, global sea levels would rise about six meters; if all of Antarctica went, it would contribute about 60 meters. So parts of these ice sheets, but not all, must have melted during the long-ago warm period.
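That conclusion is simple budget arithmetic on the article’s numbers, as this sketch shows:

```python
# Budget arithmetic behind "parts, but not all": the article's round
# numbers for sea level equivalents, in meters.
GREENLAND_M = 6.0
ANTARCTICA_M = 60.0
PLIOCENE_HIGHSTAND_M = 25.0

print(GREENLAND_M >= PLIOCENE_HIGHSTAND_M)   # False: Greenland alone is not enough
print(GREENLAND_M + ANTARCTICA_M)            # 66.0: everything melting overshoots

# One plausible split: all of Greenland plus roughly a third of Antarctica.
antarctic_fraction = (PLIOCENE_HIGHSTAND_M - GREENLAND_M) / ANTARCTICA_M
print(f"Antarctic fraction needed: {antarctic_fraction:.0%}")   # ~32%
```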
Several of the PlioMIP2 research groups are modeling how the polar ice sheets responded in the heat of the Pliocene. “It will tell us how much we should be worried,” Otto-Bliesner says. One new study suggests that the northern and southern ice sheets may have behaved out of phase with each other. In a simulation of the mid- to late Pliocene, climate modeler Bas de Boer of Utrecht University in the Netherlands and colleagues found that as Greenland’s ice melted, Antarctica’s ice could have been relatively stable, and vice versa.
“At different points, they could be contributing to the sea level story or against it,” says Haywood. He, along with colleagues, reported the results in the Oct. 30 Geophysical Research Letters.
That out-of-sync melting suggests the Pliocene was a complicated time. Just because global temperatures were high doesn’t mean that all of Earth’s ice sheets melted equally. (Today, both Greenland and West Antarctica are losing ice to the oceans as global temperatures rise.)
The Pliocene wound to an end around 2.6 million years ago, as CO2 levels dropped. Chemical reactions with eroding rocks may have sucked much of the CO2 out of the atmosphere and tucked it away in the oceans, removing the greenhouse gas. The planet entered a long-term cooling trend. Since the end of the Pliocene, Earth has been in and out of a series of ice ages.
But now, greenhouse gases are once again flooding into the atmosphere. Global temperatures are ticking up inexorably year after year. That makes the lessons of the past all the more relevant for the future.
Youngsters have probably been playing their way into cultural competence for at least tens of thousands of years. So why are signs of children largely absent from the archaeological record?
A cartoon that Biblical scholar Kristine Garroway taped up in her college dorm helps to explain kids’ invisibility at ancient sites: Two men in business suits stare intently at an unidentifiable round object sitting on a table. “Hey, what’s this?” asks the first guy. “I dunno, probably a toy … or a religious object,” says the second.

Archaeologists have long tended to choose the second option, says Garroway, now a visiting scientist at Hebrew Union College–Jewish Institute of Religion in Los Angeles. Ambiguous finds, such as miniature pottery vessels and small figurines, get classified as ritual or decorative objects. Some of these artifacts undoubtedly were used in ceremonies. But not all of them, Garroway argues.

Of 48 miniature clay vessels excavated from inside roughly 3,650- to 4,000-year-old houses at Israel’s Tel Nagila site, 10 retained fingerprints the size of children’s that were made during the shaping of soft clay, before the clay was heated and hardened, archaeologists reported in 2013. Kids must have made those somewhat unevenly shaped jars and bowls, each easily held within a child’s hand, concluded Joe Uziel of the Israel Antiquities Authority in Jerusalem and independent Israeli researcher Rona Avissar Lewis in Palestine Exploration Quarterly.

Unusual finds in Israel dating to around 3,000 years ago also represent children’s early attempts to mimic adult craftwork, Garroway said in a November 18 presentation in Boston at the annual meeting of the American Schools of Oriental Research. Numerous rounded clay disks, each pierced with two holes, have mystified investigators for nearly a century. As early as 1928, an archaeologist suggested that these button-sized objects were toys. After passing a string through both of a disk’s holes and tying the ends together, a youngster could swing the string to wind up the toy and then pull both ends of the string to make the disk spin.

Clay disks from six Israeli sites can be separated into those made by skilled artisans and others — featuring rough edges and unevenly spaced holes — made by novices, including children, Garroway proposes. If those items were toys, sloppy execution may have partly resulted from children’s impatience to play with the final product, she suspects.
Garroway’s proposal appears likely, especially in light of evidence that more than 10,000 years earlier, people in France and Spain made similar spinning disks decorated with animals that appeared to move as the toy twirled (SN: 6/30/12, p. 12), says archaeologist Michelle Langley of Griffith University in Brisbane, Australia.
Western European finds from as early as 14,000 to 21,000 years ago also may have gone unrecognized as children’s toys, Langley suggests in a paper published this month in the Oxford Journal of Archaeology. One specimen, a cave lion carved out of a reindeer’s antler, displays so much polish from handling that children may have played with the item for years, she says. Some bone spearpoints with broken tips bear signs of unskilled repair, suggesting adults gave the damaged weapons to children to practice bone-working skills and perhaps play with, she adds.
Protecting the anonymity of publicly available genetic data, including DNA donated to research projects, may be impossible.
About 60 percent of people of European descent who search genetic genealogy databases will find a match with a relative who is a third cousin or closer, a new study finds. The result suggests that with a database of about 3 million people, police or anyone else with access to DNA data can figure out the identity of virtually any American of European descent, Yaniv Erlich and colleagues report online October 11 in Science.

Erlich, the chief science officer of the consumer genetic testing company MyHeritage, and colleagues examined his company’s database and that of the public genealogy site GEDMatch, each containing data from about 1.2 million people. Using DNA matches to relatives, along with family tree information and some basic demographic data, the scientists estimate that they could narrow the identity of an anonymous DNA owner to just one or two people.
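The jump from a 60 percent match rate to near-universal identifiability is, at heart, a sampling effect. A deliberately crude back-of-envelope model (not the method Erlich’s team used, and with every number an illustrative assumption) captures the shape of it:

```python
# Toy model of relative-matching coverage, NOT the method in Erlich et al.
# Assumes the database is a random sample of a 200-million-person ancestry
# pool and that each person has ~150 detectable third-cousin-or-closer
# relatives; both numbers are illustrative, tuned so a 1.2M-entry database
# lands near the study's ~60 percent figure.
import math

POPULATION = 200e6
DETECTABLE_RELATIVES = 150

def p_at_least_one_match(db_size):
    # Expected number of relatives in the database, then a Poisson
    # approximation for the chance that at least one shows up.
    expected = db_size * DETECTABLE_RELATIVES / POPULATION
    return 1 - math.exp(-expected)

for db in (1.2e6, 3.0e6):
    print(f"{db/1e6:.1f}M-person database -> P(match) ~ {p_at_least_one_match(db):.0%}")
# ~59% at 1.2 million entries, ~89% at 3 million: coverage climbs steeply
# as databases grow, which is why near-universal identifiability follows.
```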
Recent cases identifying suspects in violent crimes through DNA searches of GEDMatch, such as the Golden State Killer case (SN Online: 4/29/18), have raised privacy concerns (SN Online: 6/7/18). And the same process used to find rape and murder suspects can also identify people who have donated anonymous DNA for genetic and medical research studies, the scientists say.
Genetic data used in research is stripped of information like names, ages and addresses, and can’t be used to identify individuals, government officials have said. But “that’s clearly untrue,” as Erlich and colleagues have demonstrated, says Rori Rohlfs, a statistical geneticist at San Francisco State University, who was not involved in the study.
Using genetic genealogy techniques that mirror searches for the Golden State Killer and suspects in at least 15 other criminal cases, Erlich’s team identified a woman who participated anonymously in the 1000 Genomes project. That project cataloged genetic variants in about 2,500 people from around the world. Erlich’s team pulled the woman’s anonymous data from the publicly available 1000 Genomes database. The researchers then created a DNA profile similar to the ones generated by consumer genetic testing companies such as 23andMe and AncestryDNA (SN: 6/23/18, p. 14) and uploaded that profile to GEDMatch.
A search turned up matches with two distant cousins, one from North Dakota and one from Wyoming. The cousins also shared DNA indicating that they had a common set of ancestors four to six generations ago. Building on some family tree information already collected by those cousins, researchers identified the ancestral couple and filled in hundreds of their descendants, looking for a woman who matched the age and other publicly available demographic data of the 1000 Genomes participant.
It took a day to find the right person.
That example suggests that scientists need to reconsider whether they can guarantee research participants anonymity if genetic data are publicly shared, Rohlfs says.
In reality, though, identifying a person from a DNA match with a distant relative is much harder than it appears, and requires a lot of expertise and gumshoe work, Ellen Greytak says. She is the director of bioinformatics at Parabon NanoLabs, a company in Reston, Va., that has helped close at least a dozen criminal cases since May using genetic genealogy searches. “The gulf between a match and identification is absolutely massive,” she says.
The company has also found that people of European descent often have DNA matches to relatives in GEDMatch. But tracking down a single suspect from those matches is often confounded by intermarriages, adoptions, aliases, cases of misidentified or unknown parentage and other factors, says CeCe Moore, a genealogist who spearheads Parabon’s genetic genealogy service.
“The study demonstrates the power of genetic genealogy in a theoretical way,” Moore says, “but doesn’t fully capture the challenges of the work in practice.” For instance, Erlich and colleagues already had some family tree information from the 1000 Genomes woman’s relatives, “so they had a significant head start.”
Erlich’s example might be an oversimplification, Rohlfs says. The researchers made rough estimates and assumptions that are not perfect, but the conclusion is solid, she says. “Their work is approximate, but totally reasonable.” And the conclusion that almost anyone can be identified from DNA should spark public discussion about how DNA data should be used for law enforcement and research, she says.
End of the Megafauna. Ross D.E. MacPhee and Peter Schouten (illustrator). W.W. Norton & Co., $35.
Today’s land animals are a bunch of runts compared with creatures from the not-too-distant past. Beasts as big as elephants, gorillas and bears were once much more common around the world. Then, seemingly suddenly, hundreds of big species, including the woolly mammoth, the giant ground sloth and a lizard weighing as much as half a ton, disappeared. In End of the Megafauna, paleomammalogist Ross MacPhee makes one thing clear: The science on what caused the extinctions of these megafauna — animals larger than 44 kilograms, or about 100 pounds — is far from settled. MacPhee dissects the evidence behind two main ideas: that as humans moved into new parts of the world over the last 50,000 years, people hunted the critters into oblivion, or that changes in climate left the animals too vulnerable to survive. As MacPhee shows, neither scenario matches all of the available data.
Throughout, Peter Schouten’s illustrations, reminiscent of paintings that enliven natural history museums, bring the behemoths back to life. At times, MacPhee slips in too many technical terms. But overall, he offers readers an informative, up-to-date overview of a fascinating period in Earth’s history.
Off a gravel road at the edge of a college campus — next door to the town’s holding pen for stray dogs — is a busy test site for the newest technologies in drinking water treatment.
In the large shed-turned-laboratory, University of Massachusetts Amherst engineer David Reckhow has started a movement. More people want to use his lab to test new water treatment technologies than the building has space for.
The lab is a revitalization success story. In the 1970s, when the Clean Water Act put new restrictions on water pollution, the diminutive gray building in Amherst, Mass., was a place to test those pollution-control measures. But funding was fickle, and over the years, the building fell into disrepair. In 2015, Reckhow brought the site back to life. He and a team of researchers cleaned out the junk, whacked the weeds that engulfed the building and installed hundreds of thousands of dollars’ worth of monitoring equipment, much of it donated or bought secondhand.
“We recognized that there’s a lot of need for drinking water technology,” Reckhow says. Researchers, students and start-up companies all want access to test ways to disinfect drinking water, filter out contaminants or detect water-quality slipups.

On a Monday afternoon in October, the lab is busy. Students crunch data around a big table in the main room. Small-scale tests of technology that uses electrochemistry to clean water chug along, hooked up to monitors that track water quality. On a lab bench sits a graduate student’s low-cost replica of an expensive piece of monitoring equipment. The device alerts water treatment plants when the by-products of disinfection chemicals in a water supply are reaching dangerous levels. In an attached garage, two startup companies are running larger-scale tests of new kinds of membranes that filter out contaminants.

Parked behind the shed is the almost-ready-to-roll newcomer. Starting in 2019, the Mobile Water Innovation Laboratory will take promising new and affordable technologies to local communities for testing. That’s important, says Reckhow, because there’s so much variety in the quality of water that comes into drinking water treatment plants. On-site testing is the only way to know whether a new approach is effective, he says, especially for newer technologies without long-term track records.
The facility’s popularity reflects a persistent concern in the United States: how to ensure affordable access to clean, safe drinking water. Although U.S. drinking water is heavily regulated and pretty clean overall, recent high-profile contamination cases, such as the 2014 lead crisis in Flint, Mich. (SN: 3/19/16, p. 8), have exposed weaknesses in the system and shaken people’s trust in their tap water.

Tapped out

In 2013 and 2014, 42 drinking water–associated outbreaks resulted in more than 1,000 illnesses and 13 deaths, based on reports to the U.S. Centers for Disease Control and Prevention. The top culprits were Legionella bacteria and some form of chemical, toxin or parasite, according to data published in November 2017.
Those numbers tell only part of the story, however. Many of the contaminants that the U.S. Environmental Protection Agency regulates through the 1974 Safe Drinking Water Act cause problems only when exposure happens over time; the effects of contaminants like lead don’t appear immediately after exposure. Records of EPA rule violations note that in 2015, 21 million people were served by drinking water systems that didn’t meet standards, researchers reported in a February study in the Proceedings of the National Academy of Sciences. That report tracked trends in drinking water violations from 1982 to 2015.

Current technology can remove most contaminants, says David Sedlak, an environmental engineer at the University of California, Berkeley. Those include microbes, arsenic, nitrates and lead. “And then there are some that are very difficult to degrade or transform,” such as industrial chemicals called PFAS.
Smaller communities, especially, can’t always afford top-of-the-line equipment or infrastructure overhauls to, for example, replace lead pipes. So Reckhow’s facility is testing approaches to help communities address water-quality issues in affordable ways. Some researchers are adding technologies to deal with new, potentially harmful contaminants. Others are designing approaches that work with existing water infrastructure or clean up contaminants at their source.
How is your water treated?

A typical drinking water treatment plant sends water through a series of steps.
First, coagulants are added to the water. These chemicals clump together sediments, which can cloud water or make it taste funny, so they are bigger and easier to remove. A gentle shaking or spinning of the water, called flocculation, helps those clumps form (1). Next, the water flows into big tanks to sit for a while so the sediments can fall to the bottom (2). The cleaner water then moves through membranes that filter out smaller contaminants (3). Disinfection, via chemicals or ultraviolet light, kills harmful bacteria and viruses (4). Then the water is ready for distribution (5).

There’s a lot of room for variation within that basic water treatment process. Chemicals added at different stages can trigger reactions that break down chunky, toxic organic molecules into less harmful bits. Ion-exchange systems that separate contaminants by their electric charge can remove ions like magnesium or calcium that make water “hard,” as well as heavy metals, such as lead and arsenic, and nitrates from fertilizer runoff. Cities mix and match these strategies, adjusting chemicals and prioritizing treatment components, based on the precise chemical qualities of the local water supply.
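For readers who think in code, the five steps map naturally onto a staged pipeline. A toy sketch, with stage names from the article and everything else an illustrative placeholder:

```python
# Minimal sketch of the five-stage sequence described above. The stage
# names follow the article; the water record and the effect of each
# stage are illustrative placeholders, not a real process model.

def coagulate_and_flocculate(water):   # (1) clump sediments into larger flocs
    water["floc_size"] = "large"
    return water

def settle(water):                     # (2) sedimentation tanks: flocs sink out
    water["sediment"] = 0
    return water

def filter_membranes(water):           # (3) membranes catch smaller contaminants
    water["particulates"] = 0
    return water

def disinfect(water):                  # (4) chemicals or UV kill microbes
    water["pathogens"] = 0
    return water

def distribute(water):                 # (5) send finished water to the mains
    water["status"] = "distributed"
    return water

PIPELINE = [coagulate_and_flocculate, settle, filter_membranes, disinfect, distribute]

water = {"sediment": 120, "particulates": 40, "pathogens": 1000}
for stage in PIPELINE:
    water = stage(water)
print(water)
```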
Some water utilities are streamlining the treatment process by installing technologies like reverse osmosis, which removes nearly everything from the water by forcing the water molecules through a selectively permeable membrane with extremely tiny holes. Reverse osmosis can replace a number of steps in the water treatment process or reduce the number of chemicals added to water. But it’s expensive to install and operate, keeping it out of reach for many cities.
Fourteen percent of U.S. residents get water from wells and other private sources that aren’t regulated by the Safe Drinking Water Act. These people face the same contamination challenges as municipal water systems, but without the regulatory oversight, community support or funding.
“When it comes to lead in private wells … you’re on your own. Nobody is going to help you,” says Marc Edwards, the Virginia Tech engineer who helped uncover the Flint water crisis. Edwards and Virginia Tech colleague Kelsey Pieper collected water-quality data from over 2,000 wells across Virginia in 2012 and 2013. Some were fine, but others had lead levels of more than 100 parts per billion. When levels top the EPA’s 15 ppb threshold, the agency requires cities to take steps to control corrosion and to notify the public about the contamination. The researchers reported those findings in 2015 in the Journal of Water and Health.
To remove lead and other contaminants, well users often rely on point-of-use treatments. A filter on the tap removes most, but not all, contaminants. Some people spring for costly reverse osmosis systems.

New tech solutions

These three new water-cleaning approaches wouldn’t require costly infrastructure overhauls.
Ferrate to cover many bases

Reckhow’s team at UMass Amherst is testing ferrate, an ion of iron, as a replacement for several water treatment steps. First, ferrate kills bacteria in the water. Next, it breaks down carbon-based chemical contaminants into smaller, less harmful molecules. Finally, it makes ions like manganese less soluble in water so they are easier to filter out, Reckhow and colleagues reported in 2016 in Journal – American Water Works Association. With its multifaceted effects, ferrate could potentially streamline the drinking water treatment process or reduce the use of chemicals, such as chlorine, that can yield dangerous by-products, says Joseph Goodwill, an environmental engineer at the University of Rhode Island in Kingston.
Ferrate could be a useful disinfectant for smaller drinking water systems that don’t have the infrastructure, expertise or money to implement something like ozone treatment, an approach that uses ozone gas to break down contaminants, Reckhow says.
Early next year, in the maiden voyage of his mobile water treatment lab, Reckhow plans to test the ferrate approach in the small Massachusetts town of Gloucester. In the 36-foot trailer is a squeaky-clean array of plastic pipes and holding tanks. The setup routes incoming water through the same series of steps — purifying, filtering and disinfecting — that one would find in a standard drinking water treatment plant. With two sets of everything, scientists can run side-by-side experiments, comparing a new technology’s performance against the standard approach. That way researchers can see whether a new technology works better than existing options, says Patrick Wittbold, the UMass Amherst research engineer who headed up the trailer’s design.
Charged membranes

Filtering membranes tend to get clogged with small particles. “That’s been the Achilles’ heel of membrane treatment,” says Brian Chaplin, an engineer at the University of Illinois at Chicago. Unclogging the filter wastes energy and increases costs. Electricity might solve that problem and offer some side benefits, Chaplin suggests.
His team tested an electrochemical membrane made of titanium oxide or titanium dioxide that both filters water and acts as an electrode. Chemical reactions happening on the electrically charged membranes can turn nitrates into nitrogen gas or split water molecules, generating reactive ions that can oxidize contaminants in the water. The reactions also prevent particles from sticking to the membrane. Large carbon-based molecules like benzene become smaller and less harmful.

In lab tests, the membranes effectively filtered and destroyed contaminants, Chaplin says. In one test, a membrane transformed 67 percent of the nitrates in a solution into other molecules. The finished water was below the EPA’s regulatory nitrate limit of 10 parts per million, he and colleagues reported in July in Environmental Science and Technology. Chaplin expects to move the membrane into pilot tests within the next two years.
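The 67 percent figure translates directly into a compliance bound. A quick sketch, with the feed concentration an assumed value for illustration:

```python
# Arithmetic implied by the membrane test: if 67 percent of the nitrate
# is converted, the effluent holds 33 percent of the influent. The feed
# concentration below is an assumed value for illustration.
CONVERSION = 0.67
EPA_NITRATE_LIMIT_PPM = 10.0

influent_ppm = 25.0                            # assumed feed concentration
effluent_ppm = influent_ppm * (1 - CONVERSION)
print(f"effluent: {effluent_ppm:.1f} ppm")     # 8.2 ppm, under the 10 ppm limit

# Highest feed this single pass could treat and still meet the limit:
max_influent = EPA_NITRATE_LIMIT_PPM / (1 - CONVERSION)
print(f"max compliant influent: {max_influent:.0f} ppm")   # ~30 ppm
```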
Obliterate the PFAS

The industrial chemicals known as PFAS present two challenges. Only the larger ones are effectively removed by granular activated carbon, the active material in many household water filters. The smaller PFAS remain in the water, says Christopher Higgins, an environmental engineer at the Colorado School of Mines in Golden. Plus, filtering isn’t enough, because the chunky chemicals are hard to break down for safe disposal.
Higgins and colleague Timothy Strathmann, also at the Colorado School of Mines, are working on a process to destroy PFAS. First, a specialized filter with tiny holes grabs the molecules out of the water. Then, sulfite is added to the concentrated mixture of contaminants. When hit with ultraviolet light, the sulfite generates reactive electrons that break down the tough carbon-fluorine bonds in the PFAS molecules. Within 30 minutes, the combination of UV radiation and sulfites almost completely destroyed one type of PFAS, other researchers reported in 2016 in Environmental Science and Technology.
Soon, Higgins and Strathmann will test the process at Peterson Air Force Base in Colorado, one of nearly 200 U.S. sites known to have groundwater contaminated by PFAS. Cleaning up those sites would remove the pollutants from groundwater that may also feed wells or city water systems.
As the asteroid Bennu comes into sharper focus, planetary scientists are seeing signs of water locked up in the asteroid’s rocks, NASA team members announced December 10.
“It’s one of the things we were hoping to find,” team member Amy Simon of NASA’s Goddard Space Flight Center in Greenbelt, Md., said in a news conference at the American Geophysical Union meeting in Washington, D.C. “This is evidence of liquid water in Bennu’s past. This is really big news.”

NASA’s OSIRIS-REx spacecraft just arrived at Bennu on December 3 (SN Online: 12/3/18). Over the next year, the team will search for the perfect spot on the asteroid to grab a handful of dust and return it to Earth.

“Very early in the mission, we’ve found out Bennu is going to provide the type of material we want to return,” said principal investigator Dante Lauretta of the University of Arizona in Tucson. “It definitely looks like we’ve gone to the right place.”
OSIRIS-REx’s onboard spectrometers measure the chemical signatures of various minerals based on the wavelengths of light they emit and absorb. The instruments were able to see signs of hydrated minerals on Bennu’s surface about a month before the spacecraft arrived at the asteroid, and the signal has remained strong all over the asteroid’s surface as the spacecraft approached, Simon said. Those minerals can form only in the presence of liquid water, and suggest that Bennu had a hydrothermal system in its past.
Bennu’s surface is also covered in more boulders and craters than the team had expected based on observations of the asteroid taken from Earth. Remote observations led the team to expect a few large boulders, about 10 meters wide. Instead they see hundreds, some of them up to 50 meters wide.
“It’s a little more rugged of an environment,” Lauretta said. But that rough surface can reveal details of Bennu’s internal structure and history. If Bennu were one solid mass, for instance, a major impact could crack or shatter its entire surface. The fact that it has large craters means it has survived impacts intact. It may instead be a rubble pile loosely held together by its own gravity.

The asteroid’s density supports the rubble pile idea. OSIRIS-REx’s first estimate of Bennu’s density shows it is about 1,200 kilograms per cubic meter, Lauretta said. The average rock is about 3,000 kilograms per cubic meter. The hydrated minerals go some way toward lowering the asteroid’s density, since water is less dense than rock. But up to 40 percent of the asteroid may be caves and voids as well, Lauretta said.
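The void estimate follows from a one-line porosity calculation. A sketch, in which the grain density assumed for hydrated rock is illustrative:

```python
# Porosity implied by Bennu's bulk density: void fraction = 1 - bulk/grain.
# The 3,000 kg/m^3 figure is the article's "average rock"; the 2,000 kg/m^3
# grain density for hydrated, water-bearing rock is an assumption.
BULK_DENSITY = 1200.0    # kg/m^3, OSIRIS-REx's first estimate for Bennu

def void_fraction(bulk, grain):
    """Fraction of the body that is empty space."""
    return 1 - bulk / grain

print(f"vs. solid rock:    {void_fraction(BULK_DENSITY, 3000.0):.0%}")   # 60%
print(f"vs. hydrated rock: {void_fraction(BULK_DENSITY, 2000.0):.0%}")   # 40%
# Lower-density hydrated minerals soak up part of the deficit, which is
# how the cave-and-void share can come in near 40 percent rather than 60.
```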
Some of the rocks on the surface appear to be fractured in a spindly pattern. “If you drop a dinner plate on the ground, you get a spider web of fractures,” says team member Kevin Walsh of the Southwest Research Institute in Boulder, Colo. “We’re seeing this in some boulders.”
The boulders may have cracked in response to the drastic change in temperatures they experience as the asteroid spins. Studying those fracture patterns in more detail will reveal the properties of the rocks.
The OSIRIS-REx team also needs to know how many boulders of various sizes are strewn across the asteroid’s surface. Any rock larger than about 20 centimeters across would pose a hazard to the spacecraft’s sampling arm, says Keara Burke of the University of Arizona. Burke, an undergraduate engineering student, is heading up a boulder mapping project. “My primary goal is safety,” she says. “If it looks like a boulder to me, within reasonable guidelines, then I mark it as a boulder. We can’t sample anything if we’re going to crash.”
The team also needs to know where the smallest grains of rock and dust are, as OSIRIS-REx’s sampling arm can pick up grains only about 2 centimeters across. One way to find the small rocks is to measure how well the asteroid’s surface retains heat. Bigger rocks are slower to heat up and slower to cool down, so they’ll radiate heat out into space even on the asteroid’s night side. Smaller grains of dust heat up and cool down much more quickly.
“It’s exactly like a beach,” Walsh says. “During the day it’s scalding hot, but then it’s instantly cold when the sun sets.”
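Walsh’s beach analogy corresponds to a standard quantity, thermal inertia. A rough sketch with generic material values, not OSIRIS-REx measurements:

```python
# The beach analogy, quantified: thermal inertia Gamma = sqrt(k * rho * c)
# sets how slowly a surface heats and cools. Material values are rough,
# generic numbers, not OSIRIS-REx measurements.
import math

def thermal_inertia(conductivity, density, heat_capacity):
    """Gamma in J m^-2 K^-1 s^-1/2; higher means warmer nights."""
    return math.sqrt(conductivity * density * heat_capacity)

boulder = thermal_inertia(2.0, 3000.0, 800.0)      # solid rock
regolith = thermal_inertia(0.001, 1500.0, 600.0)   # loose, fine grains

print(f"boulder:  {boulder:.0f}")    # ~2200, still radiating heat at night
print(f"regolith: {regolith:.0f}")   # ~30, 'instantly cold when the sun sets'
```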
Measurements of the asteroid’s heat storage so far suggest that there are regions with grains as small as 1 or 2 centimeters across, Lauretta said, though it is still too early to be certain.
“I am confident that we’ll find some fine-grained regions,” Lauretta said. Some may be located inside craters. The challenge will be finding an area wide enough that the spacecraft’s navigation system can steer to it accurately.
The results are in: Ultima Thule, the distant Kuiper Belt object that got a close visit from the New Horizons spacecraft on New Year’s Day, looks like two balls stuck together.
“What you are seeing is the first contact binary ever explored by a spacecraft, two separate objects that are now joined together,” principal investigator Alan Stern of the Southwest Research Institute in Boulder, Colo., said January 2 in a news conference held at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md.
“It’s a snowman, if it’s anything at all,” Stern said. (Twitter was quick to supply another analogy: the rolling BB-8 droid from Star Wars.)
That shape is enough to lend credence to the idea that planetary bodies grow up by the slow clumping of small rocks. Ultima Thule, whose official name is 2014 MU69, is thought to be among the oldest and least-altered objects in the solar system, so knowing how it formed can reveal how planets formed in general (SN Online: 12/18/18). “Think of New Horizons as a time machine … that has brought us back to the very beginning of solar system history, to a place where we can observe the most primordial building blocks of the planets,” said Jeff Moore of NASA’s Ames Research Center in Moffett Field, Calif., who leads New Horizons’ geology team. “It’s gratifying to see these perfectly formed contact binaries in their native habitat. Our ideas of how these things form seem to be somewhat vindicated by these observations.”
The view from about 28,000 kilometers away shows that MU69 is about 33 kilometers long and has two spherical lobes, one about three times the size of the other. The spheres are connected by a narrow “neck” that appears brighter than much of the rest of the surface. That could be explained by small grains of surface material rolling downhill to settle in the neck, because small grains tend to reflect more light than large ones, said New Horizons deputy project scientist Cathy Olkin of the Southwest Research Institute. Even the brightest areas reflected only about 13 percent of the sunlight that hit them, though. The darkest reflected just 6 percent, about the same brightness as potting soil.
Measurements also show that MU69 rotates once every 15 hours, give or take one hour. That’s a Goldilocks rotation speed, Olkin said. If it spun too fast, MU69 would break apart; too slow would be hard to explain for such a small body. Fifteen hours is just right.
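The “too fast” end of that Goldilocks range can be estimated for a strengthless rubble pile. A sketch, with MU69’s unknown density replaced by assumed values:

```python
# The "too fast" limit for a strengthless rubble pile: material at the
# equator flies off when spin beats gravity, at period P = sqrt(3*pi/(G*rho)).
# MU69's density is unknown; the values below are assumptions.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def breakup_period_hours(density_kg_m3):
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600

for rho in (500.0, 1000.0):
    print(f"rho = {rho:6.0f} kg/m^3 -> breakup period ~ {breakup_period_hours(rho):.1f} h")
# ~4.7 h at 500 kg/m^3 and ~3.3 h at 1000 kg/m^3: a 15-hour day is
# comfortably on the stable side of the limit.
```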
The lobes’ spherical shape is best explained by collections of small rocks glomming together to form larger rocks, Moore said. The collisions between the rocks happened at extremely slow speeds, so the rocks accreted rather than breaking each other apart. The final collision was between the two spheres, which the team dubbed “Ultima” (the bigger one) and “Thule” (the smaller one). That collision probably happened at no more than a few kilometers per hour, “the speed at which you might park your car in a parking space,” Moore said. “If you had a collision with another car at those speeds, you may not even bother to fill out the insurance forms.”
New Horizons also picked up MU69’s reddish color. The science team thinks the rusty hue comes from radiation altering exotic ice, frozen material like methane or nitrogen rather than water, although they don’t know exactly what that ice is made of yet.
The spacecraft is still sending data back to Earth, and will continue transmitting details of the flyby for the next 18 months. Even as the New Horizons team members shared the first pictures from the spacecraft’s flyby, data was arriving that will reveal details of MU69’s surface composition.
“The real excitement today is going to be in the composition team room,” Olkin said. “There’s no way to make anything like this type of observation without having a spacecraft there.”