What happens in Earth’s atmosphere during an eclipse?

As the moon’s shadow races across North America on August 21, hundreds of radio enthusiasts will turn on their receivers — rain or shine. These observers aren’t after the sun. They’re interested in a shell of electrons hundreds of kilometers overhead, which is responsible for heavenly light shows, GPS navigation and the continued existence of all earthly beings.

This part of the atmosphere, called the ionosphere, absorbs extreme ultraviolet radiation from the sun, protecting life on the ground from its harmful effects. “The ionosphere is the reason life exists on this planet,” says physicist Joshua Semeter of Boston University.
It’s also the stage for brilliant displays like the aurora borealis, which appears when charged material in interplanetary space skims the atmosphere. And the ionosphere is important for the accuracy of GPS signals and radio communication.

This layer of the atmosphere forms when radiation from the sun strips electrons from, or ionizes, atoms and molecules in the atmosphere between about 75 and 1,000 kilometers above Earth’s surface. That leaves a zone full of free-floating negatively charged electrons and positively charged ions, which warps and distorts signals passing through it.
Without direct sunlight, though, ionization stops. Electrons start to rejoin the atoms and molecules they abandoned, neutralizing the atmosphere’s charge. With fewer free electrons bouncing around, the ionosphere reflects radio waves differently, like a distorted mirror.
We know roughly how this happens, but not precisely. The eclipse will give researchers a chance to examine the charging and uncharging process in almost real time.
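The charging and uncharging cycle the researchers want to watch is often described with a simple continuity equation: the electron density grows with the solar ionization rate and shrinks as electrons recombine with ions. This sketch is not from the article — the equation is the standard textbook model, and every parameter value here is an illustrative, made-up choice — but it shows the expected sag and recovery during an eclipse:

```python
# Illustrative model: dN/dt = q - alpha * N^2, where N is electron
# density, q is the solar ionization rate, and alpha is an effective
# recombination coefficient. During totality, q drops toward zero.
def simulate(q_sun, alpha, n0, dt, steps, eclipse):
    """Step the electron density forward; sunlight is blocked during `eclipse`."""
    n, history = n0, []
    for t in range(steps):
        q = 0.0 if t in eclipse else q_sun
        n += (q - alpha * n * n) * dt
        history.append(n)
    return history

# Hypothetical values, chosen so n0 starts at the daytime equilibrium
# sqrt(q/alpha); units are arbitrary.
dens = simulate(q_sun=1e9, alpha=1e-13, n0=1e11, dt=1.0,
                steps=600, eclipse=range(100, 300))
# Density holds steady, decays while the sun is blocked, then recovers.
```

In this toy run, the density falls by more than half during the simulated totality and climbs back toward its daytime equilibrium afterward, the same qualitative behavior the eclipse observers hope to capture in real data.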

“The eclipse lets us look at the change from light to dark to light again very quickly,” says Jill Nelson of George Mason University in Fairfax, Va.

Joseph Huba and Douglas Drob of the U.S. Naval Research Laboratory in Washington, D.C., predicted some of what should happen to the ionosphere in the July 17 Geophysical Research Letters. At higher altitudes, the electrons’ temperature should decrease by 15 percent. Between 150 and 350 kilometers above Earth’s surface, the density of free-floating electrons should drop by a factor of two as they rejoin atoms, the researchers say. This drop in free-floating electrons should create a disturbance that travels along Earth’s magnetic field lines. That echo of the eclipse-induced ripple in the ionosphere may be detectable as far away as the tip of South America.

Previous experiments during eclipses have shown that the degree of ionization doesn’t simply die down and then ramp back up again, as you might expect. The amount of ionization you see seems to depend on how far you are from being directly in the moon’s shadow.

For a project called Eclipse Mob, Nelson and her colleagues will enlist volunteers around the United States to gather data on how the ionosphere responds when the sun is briefly blocked over the largest land area yet studied.
About 150 Eclipse Mob participants received a build-it-yourself kit for a small radio receiver that plugs into the headphone jack of a smartphone. Others made their own receivers after the project ran out of kits. On August 21, the volunteers will receive signals from radio transmitters and record the signal’s strength before, during and after the eclipse.
Nelson isn’t sure what to expect in the data, except that it will look different depending on where the receivers are. “We’ll be looking for patterns,” she says. “I don’t know what we’re going to see.”

Semeter and his colleagues will be looking for the eclipse’s effect on GPS signals. They would also like to measure the eclipse’s effects on the ionosphere using smartphones — eventually.

For this year’s solar eclipse, they will observe radio signals using an existing network of GPS receivers in Missouri, supplemented with small, cheap GPS receivers similar to the kind in most phones. The eclipse will create a big cool spot, setting off waves in the atmosphere that will propagate away from the moon’s shadow. Such waves leave an imprint on the ionosphere that affects GPS signals. The team hopes to combine high-quality data with messier data to lay the groundwork for future experiments to tap into the smartphone crowd.

“The ultimate vision of this project is to leverage all 2 billion smartphones around the planet,” Semeter says. Someday, everyone with a phone could be a node in a global telescope.

If it works, it could be a lifesaver. Similar atmospheric waves were seen radiating from the source of the 2011 earthquake off the coast of Japan (SN Online: 6/16/11). “The earthquake did the sort of thing the eclipse is going to do,” Semeter says. Understanding how these waves form and move could potentially help predict earthquakes in the future.

Does the corona look different when solar activity is high versus when it’s low?

Carbondale, Ill., is just a few kilometers north of the point where this year’s total solar eclipse will linger longest — the city will get two minutes and 38 seconds of total darkness when the moon blocks out the sun. And it’s the only city in the United States that will also be in the path of totality when the next total solar eclipse crosses North America, in 2024 (SN: 8/5/17, p. 32). The town is calling itself the Eclipse Crossroads of America.
“Having a solar eclipse that goes through the entire continent is rare enough,” says planetary scientist Padma Yanamandra-Fisher of the Space Science Institute’s branch in Rancho Cucamonga, Calif. “Having two in seven years is even more rare. And two going through the same city is rarer still.”

That makes Carbondale the perfect spot to investigate how the sun’s atmosphere, or corona, looks different when solar activity is high versus low.

Every 11 years or so, the sun cycles from periods of high magnetic field activity to low activity and back again. The frequency of easy-to-see features — like sunspots on the sun’s visible surface, solar flares and the larger eruptions of coronal mass ejections — cycles, too. But it has been harder to trace what happens to the corona’s streamers, the long wispy tendrils that give the corona its crownlike appearance and originate from the magnetic field.
The corona is normally invisible from Earth, because the bright solar disk washes it out. Even space telescopes that are trained on the sun can’t see the inner part of the corona — they have to block some of it out for their own safety (SN Online: 8/11/17). So solar eclipses are the only time researchers can get a detailed view of what the inner corona, where the streamers are rooted, is up to.
Right now, the sun is in a period of exceptionally low activity. Even at the most recent peak in 2014, the sun produced a pathetically wimpy crop of flares and sunspots (SN: 11/2/13, p. 22). During the Aug. 21 solar eclipse, solar activity will still be on the decline. But seven years from now during the 2024 eclipse, it will be on the upswing again, nearing its next peak.

Yanamandra-Fisher will be in Carbondale for both events. This year, she’s teaming up with a crowdsourced eclipse project called the Citizen Continental-America Telescope Eclipse experiment. Citizen CATE will place 68 identical telescopes along the eclipse’s path from Oregon to South Carolina.

As part of a series of experiments, Yanamandra-Fisher and her colleagues will measure the number, distribution and extent of streamers in the corona. Observations of the corona during eclipses going back as far as 1867 suggest that streamers vary with solar activity. During low activity, they tend to be more squat and concentrated closer to the sun’s equator. During high activity, they can get more stringy and spread out.

Scientists suspect that’s because as the sun ramps up its activity, its strengthening magnetic field lets the streamers stretch farther out into space. The sun’s equatorial magnetic field also splits to straddle the equator rather than encircle it. That allows streamers to spread toward the poles and occupy new space.

Although physicists have been studying the corona’s changes for 150 years, that’s still only a dozen or so solar cycles’ worth of data. There is plenty of room for new observations to help decipher the corona’s mysteries. And Yanamandra-Fisher’s group might be the first to collect data from the same point on Earth.

“This is pure science that can be done only during an eclipse,” Yanamandra-Fisher says. “I want to see how the corona changes.”

Scientists create the most cubic form of ice crystals yet

Cube-shaped ice is rare, at least at the microscopic level of the ice crystal. Now researchers have coaxed typically hexagonal 3-D ice crystals to form the most cubic ice ever created in the lab.

Cubed ice crystals — which may exist naturally in cold, high-altitude clouds — could help improve scientists’ understanding of clouds and how they interact with Earth’s atmosphere and sunlight, two interactions that influence climate.

Engineer Barbara Wyslouzil of Ohio State University and colleagues made the cubed ice by shooting nitrogen and water vapor through nozzles at supersonic speeds. The gas mixture expanded and cooled, and then the vapor formed nanodroplets. Quickly cooling the droplets further kept them liquid below their usual freezing temperature. Then, at around –48° Celsius, the supercooled droplets froze in about a millionth of a second.

The low-temperature quick freeze allowed the cubic ice to form, the team reports in the July 20 Journal of Physical Chemistry Letters. The crystals weren’t perfect cubes but were about 80 percent cubic. That’s better than previous studies, which made ice that was 73 percent cubic.

Fiery re-creations show how Neandertals could have easily made tar

Neandertals took stick-to-itiveness to a new level. Using just scraps of wood and hot embers, our evolutionary cousins figured out how to make tar, a revolutionary adhesive that they used to make formidable spears, chopping tools and other implements by attaching sharp-edged stones to handles, a new study suggests.

Researchers already knew that tar-coated stones date to at least 200,000 years ago at Neandertal sites in Europe, well before the earliest known evidence of tar production by Homo sapiens, around 70,000 years ago in Africa. Now, archaeologist Paul Kozowyk of Leiden University in the Netherlands and colleagues have re-created the methods that these extinct members of the human genus could have used to produce tar.
Three straightforward techniques could have yielded enough adhesive for Neandertals’ purposes, Kozowyk’s team reports August 31 in Scientific Reports. Previous studies have found that tar lumps found at Neandertal sites derive from birch bark. Neandertal tar makers didn’t need ceramic containers or kilns and didn’t have to heat the bark to precise temperatures, the scientists conclude.
These findings fuel another burning question about Neandertals: whether they had mastered the art of building and controlling a fire. Some researchers suspect that Neandertals had specialized knowledge of fire control and used it to make adhesives; others contend that Neandertals only exploited the remnants of wildfires. The new study suggests they could have invented low-tech ways to make tar with fires, but it’s not clear whether those fires were intentionally lit.

“This new paper demystifies the prehistoric development of birch-bark tar production, showing that it was not predicated on advanced cognitive or technical skills but on knowledge of familiar, readily available materials,” says archaeologist Daniel Adler of the University of Connecticut in Storrs, who did not participate in the study.
Kozowyk’s group tested each of three tar-making techniques between five and 11 times. The lowest-tech approach consisted of rolling up a piece of birch bark, tying it with wood fiber and covering it in a mound of ashes and embers from a wood fire. Tar formed between bark layers and was scraped off the unrolled surface. The experimenters collected up to about one gram of tar this way.

A second strategy involved igniting a roll of birch bark at one end and placing it in a small pit. In some cases, embers were placed on top of the bark. The researchers either scraped tar off bark layers or collected it as it dripped onto a rock, strip of bark or a piece of bark folded into a cup. The most tar gathered with this method, about 1.8 grams, was in a trial using a birch-bark cup placed beneath a bark roll with its lit side up and covered in embers.

Repeating either the ash-mound or pit-roll techniques once or twice would yield the relatively small quantity of tar found at one Neandertal site in Europe, the researchers say. Between six and 11 repetitions would produce a tar haul equal to that previously unearthed at another European site.

In a third technique, the scientists placed a birch-bark vessel for collecting tar into a small pit. They placed a layer of twigs across the top of the pit and placed pebbles on top, then added a large, loose bark roll covered in a dome-shaped coat of wet soil. A fire was then lit on the earthen structure. This method often failed to produce anything. But after some practice with the technique, one trial resulted in 15.7 grams of tar — enough to make a lump comparable in size to the largest chunks found at Neandertal sites.

An important key to making tar was reaching the right heat level. Temperatures inside bark rolls, vessels, fires and embers varied greatly, but at some point each procedure heated bark rolls to between around 200° and 400° Celsius, Kozowyk says. In that relatively broad temperature range, tar can be produced from birch bark, he contends.

If they exploited naturally occurring fires, Neandertal tar makers had limited time and probably relied on a simple technique such as ash mounds, Kozowyk proposes. If Neandertals knew how to start and maintain fires, they could have pursued more complex approaches.

Some researchers say that excavations point to sporadic use of fire by Neandertals, probably during warm, humid months when lightning strikes ignited wildfires. But other investigators contend that extinct Homo species, including Neandertals, built campfires (SN: 5/5/12, p. 18).

Whatever the case, Kozowyk says, “Neandertals could have invented tar with only basic knowledge of fire and birch bark.”

Why bats crash into windows

Walls can get the best of clumsy TV sitcom characters and bats alike.

New lab tests suggest that smooth, vertical surfaces fool some bats into thinking their flight path is clear, leading to collisions and near misses.

The furry fliers famously use sound to navigate — emitting calls and tracking the echoes to hunt for prey and locate obstacles. But some surfaces can mess with echolocation.

Stefan Greif of the Max Planck Institute for Ornithology in Seewiesen, Germany, and colleagues put bats to the test in a flight tunnel. Nineteen of 21 greater mouse-eared bats (Myotis myotis) crashed into a vertical metal plate at least once, the scientists report in the Sept. 8 Science. In some crashes, bats face-planted without even trying to avoid the plate.
Smooth surfaces act as acoustic mirrors, the team says: Up close, they reflect sound at an angle away from the bat, producing fuzzier, harder-to-read echoes than rough surfaces do. From farther away, smooth surfaces don’t produce any echoes at all.

Infrared camera footage of wild bat colonies showed that vertical plastic plates trick bats in more natural settings, too.

Crash reel
This video shows three experiments into how smooth surfaces affect bat flight. In one lab test, a vertical metal plate gave a bat the illusion of a clear flight path, causing it to crash into the barrier. In a second lab test, a horizontal metal plate created the illusion of water; the bat dips to the surface to take a sip. Finally, near a natural bat colony, a bat collides with a vertically hung plastic plate, showing that smooth surfaces can trip up bats in the wild as well.

We’re probably undervaluing healthy lakes and rivers

For sale: Pristine lake. Price negotiable.

Most U.S. government attempts to quantify the costs and benefits of protecting the country’s bodies of water are likely undervaluing healthy lakes and rivers, researchers argue in a new study. That’s because some clean water benefits get left out of the analyses, sometimes because these benefits are difficult to pin numbers on. As a result, the apparent value of many environmental regulations is probably discounted.

The study, published online October 8 in the Proceedings of the National Academy of Sciences, surveyed 20 government reports analyzing the economic impacts of U.S. water pollution laws. Most of these laws have been enacted since 2000, when cost-benefit analyses became a requirement. Analysis of a measure for restricting river pollution, for example, might find that it increases costs for factories using that river for wastewater disposal, but boosts tourism revenues by drawing more kayakers and swimmers.
Only two studies out of 20 showed the economic benefits of these laws exceeding the costs. That’s uncommon among analyses of environmental regulations, says study coauthor David Keiser, an environmental economist at Iowa State University in Ames. Usually, the benefits exceed the costs.

So why does water pollution regulation seem, on paper at least, like such a losing proposition?

Keiser has an explanation: Summing up the monetary benefits of environmental policies is really hard. Many of these benefits are intangible and don’t have clear market values. So deciding which benefits to count, and how to count them, can make a big difference in the results.
Many analyses assume water will be filtered for drinking, Keiser says, so they don’t count the human health benefits of clean lakes and rivers (SN: 8/18/18, p. 14). That’s different from air pollution cost-benefit studies, which generally do include the health benefits of cleaner air by factoring in data tracking things like doctor’s visits or drug prescriptions. That could explain why Clean Air Act rules tend to get more favorable reviews, Keiser says — human health accounts for about 95 percent of the measured benefits of air quality regulations.

“You can avoid a lake with heavy, thick, toxic algal blooms,” Keiser says. “If you walk outside and have very polluted air, it’s harder to avoid.”

But even if people can avoid an algae-choked lake, they still pay a price for that pollution, says environmental scientist Thomas Bridgeman, director of the Lake Erie Center at the University of Toledo in Ohio.
Communities that pull drinking water from a lake filled with toxic blooms of algae or cyanobacteria spend more to make the water safe to drink. Bridgeman’s seen it firsthand: In 2014, Lake Erie’s cyanobacteria blooms from phosphorus runoff shut down Toledo’s water supply for two days and forced the city to spend $500 million on water treatment upgrades.

Most of the studies surveyed by Keiser and his team were missing other kinds of benefits, too. The reports usually left out the value of eliminating certain toxic and nonconventional pollutants — molecules such as bisphenol A, or BPA, and perfluorooctanoic acid, or PFOA (SN: 10/3/15, p. 12). In high quantities, these compounds, which are used to make some plastics and nonstick coatings, can cause harm to humans and wildlife. Many studies also didn’t include discussion of how the quality of surface waters can affect groundwater, which is a major source of drinking water for many people.

A lack of data on water quality may also limit studies, Keiser’s team suggests. While there’s a national database tracking daily local air pollution levels, the data from various water quality monitoring programs aren’t centralized. That makes gathering and evaluating trends in water quality harder.

Plus, there are the intangibles — the value of aquatic species that are essential to the food chain, for example.
“Some things are just inherently difficult to put a dollar [value] on,” says Robin Craig, an environmental law professor at the University of Utah in Salt Lake City. “What is it worth to have a healthy native ecosystem?… That’s where it can get very subjective very fast.”

That subjectivity can allow agencies to analyze policies in ways that suit their own political agendas, says Matthew Kotchen, an environmental economist at Yale University. An example: the wildly different assessments by the Obama and Trump administrations of the value gained from the 2015 Clean Water Rule, also known as the Waters of the United States rule.

The rule, passed under President Barack Obama, clarified the definition of waters protected under the 1972 Clean Water Act to include tributaries and wetlands connected to larger bodies of water. The Environmental Protection Agency estimated in 2015 that the rule would result in yearly economic benefits ranging from $300 million to $600 million, edging out the predicted annual costs of $200 million to $500 million. But in 2017, Trump’s EPA reanalyzed the rule and proposed rolling it back, saying that the agency had now calculated just $30 million to $70 million in annual benefits.

The difference in the conclusions came down to the consideration of wetlands: The 2015 analysis found that protecting wetlands, such as marshes and bogs that purify water, tallied up to $500 million in annual benefits. The Trump administration’s EPA, however, left wetlands out of the calculation entirely, says Kotchen, who analyzed the policy swing in Science in 2017.

Currently, the rule has gone into effect in 26 states, but is still tied up in legal challenges.

It’s an example of how methodology — and what counts as a benefit — can have a huge impact on the apparent value of environmental policies and laws.

The squishiness in analyzing environmental benefits underlies many of the Trump administration’s proposed rollbacks of Obama-era environmental legislation, not just ones about water pollution, Kotchen says. There are guidelines for how such cost-benefit analyses should be carried out, he says, but there’s still room for researchers or government agencies to choose what to include or exclude.

In June, the EPA, then under the leadership of Scott Pruitt, proposed revising the way the agency does cost-benefit analyses to no longer include so-called indirect benefits. For example, in evaluating policies to reduce carbon dioxide emissions, the agency would ignore the fact that those measures also reduce other harmful air pollutants. The move would, overall, make environmental policies look less beneficial.

These sharp contrasts in how presidential administrations approach environmental impact studies are not unprecedented, says Craig, the environmental law professor. “Pretty much every time we change presidents, the priorities for how to weigh those different elements change.”

A lack of sleep can induce anxiety

SAN DIEGO — A sleepless night can leave the brain spinning with anxiety the next day.

In healthy adults, overnight sleep deprivation triggered anxiety the next morning, along with altered brain activity patterns, scientists reported November 4 at the annual meeting of the Society for Neuroscience.

People with anxiety disorders often have trouble sleeping. The new results uncover the reverse effect — that poor sleep can induce anxiety. The study shows that “this is a two-way interaction,” says Clifford Saper, a sleep researcher at Harvard Medical School and Beth Israel Deaconess Medical Center in Boston who wasn’t involved in the study. “The sleep loss makes the anxiety worse, which in turn makes it harder to sleep.”
Sleep researchers Eti Ben Simon and Matthew Walker, both of the University of California, Berkeley, studied the anxiety levels of 18 healthy people. Following either a night of sleep or a night of staying awake, these people took anxiety tests the next morning. After sleep deprivation, anxiety levels in these healthy people were 30 percent higher than when they had slept. On average, the anxiety scores reached levels seen in people with anxiety disorders, Ben Simon said November 5 in a news briefing.

What’s more, sleep-deprived people’s brain activity changed. In response to emotional videos, brain areas involved in emotions were more active, and the prefrontal cortex, an area that can put the brakes on anxiety, was less active, functional MRI scans showed.

The results suggest that poor sleep “is more than just a symptom” of anxiety, but in some cases, may be a cause, Ben Simon said.

Why a chemistry teacher started a science board game company

A physicist, a gamer and two editors walk into a bar. No, this isn’t the setup for some joke. After work one night, a few Science News staffers tried out a new board game, Subatomic. This deck-building game combines chemistry and particle physics for an enjoyable — and educational — time.

Subatomic is simple to grasp: Players use quark and photon cards to build protons, neutrons and electrons. With those three particles, players then construct chemical elements to score points. Scientists are the wild cards: Joseph J. Thomson, Maria Goeppert-Mayer, Marie Curie and other Nobel laureates who discovered important things related to the atom provide special abilities or help thwart other players.
The game doesn’t shy away from difficult or unfamiliar concepts. Many players might be unfamiliar with quarks, a group of elementary particles. But after a few rounds, it’s ingrained in your brain that, for example, two up quarks and one down quark create a proton. And Subatomic includes a handy booklet that explains in easy-to-understand terms the science behind the game. The physicist in our group vouched for the game’s accuracy but had one qualm: Subatomic claims that two photons, or particles of light, can create an electron. That’s theoretically possible, but scientists have yet to confirm it in the lab.
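The quark bookkeeping the game drills into players comes down to simple charge arithmetic. The quark charges below are standard particle physics; the code itself is just an illustration, not anything from the game:

```python
from fractions import Fraction

# Standard electric charges of the up and down quarks,
# in units of the elementary charge e.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def charge(quarks):
    """Total electric charge of a quark combination."""
    return sum(CHARGE[q] for q in quarks)

print(charge(["up", "up", "down"]))    # proton:  1
print(charge(["up", "down", "down"]))  # neutron: 0
```

Two up quarks and one down quark sum to a charge of +1 (a proton), while one up and two downs sum to 0 (a neutron) — exactly the combinations players assemble from cards.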

The mastermind behind Subatomic is John Coveyou, who has a master’s degree in energy, environmental and chemical engineering. As the founder and CEO of Genius Games, he has created six other games, including Ion (SN: 5/30/15, p. 29) and Linkage (SN: 12/27/14, p. 32). Next year, he’ll add a periodic table game to the list. Because Science News has reviewed several of his games, we decided to talk with Coveyou about where he gets his inspiration and how he includes real science in his products. The following discussion has been edited for length and clarity.
SN: When did you get interested in science?

Coveyou: My mom was mentally and physically disabled, and my dad was in and out of prison and mental institutions. So early on, things were very different for me. I ended up leaving home when I was in high school, hopscotching around from 12 different homes throughout my junior and senior year. I almost dropped out, but I had a lot of teachers who were amazing mentors. I didn’t know what else to do, so I joined the army. While I was in Iraq, I had a bunch of science textbooks shipped to me, and I read them in my free time. They took me out of the environments I was in and became extremely therapeutic. A lot of the issues we face as a society can be worked on by the next generation having a command of the sciences. So I’m very passionate about teaching people the sciences and helping people find joy in them.

SN: Why did you start creating science games?

Coveyou: I was teaching chemistry at a community college, and I noticed that my students were really intimidated by the chemistry concepts before they even came into the classroom. They really struggled with a lot of the basic terminology. At the same time, I’ve been a board gamer pretty much my whole life. And it kind of hit me like, “Whoa, wait a second. What if I made some games that taught some of the concepts that I’m trying to teach my chemistry students?” So I just took a shot at it. The first couple of games were terrible. I didn’t really know what I was doing, but I kept at it.

SN: How do you test the games?

Coveyou: We first test with other gamers. Once we’re ready to get feedback from the general public, we go to middle school or high school students. Once we test a game with people face-to-face, we will send it across the world to about 100 to 200 different play testers, and those vary from your hard-core gamers to homeschool families to science teachers, who try it in the classroom.

SN: How do you incorporate real science into your games?

Coveyou: I pretty much always start with a science concept in mind and think about how can we create a game that best reflects the science that we want to communicate. For all of our upcoming games, we include a booklet about the science. That document is not created by Genius Games. We have about 20 to 30 Ph.D.s and doctors across the globe who write the content and edit each other. That’s been a real treat to actually show players how the game is accurate. We’ve had so many scientists and teachers who are just astonished that we created something like this that was accurate, but also fun to play.

Voyager 2 spacecraft enters interstellar space

Voyager 2 has entered interstellar space. The spacecraft slipped out of the huge bubble of particles that encircles the solar system on November 5, becoming the second human-made craft ever to cross the heliopause, the boundary between the sun’s bubble and the space between the stars.

Coming in second place is no mean achievement. Voyager 1 became the first spacecraft to exit the solar system in 2012. But that craft’s plasma instrument stopped working in 1980, leaving scientists without a direct view of the solar wind, hot charged particles constantly streaming from the sun (SN Online: 9/12/13). Voyager 2’s plasma sensors are still working, providing unprecedented views of the space between stars.

“We’ve been waiting with bated breath for the last couple of months for us to be able to see this,” NASA solar physicist Nicola Fox said at a Dec. 10 news conference at the American Geophysical Union meeting in Washington, D.C.

NASA launched the twin Voyager spacecraft in 1977 on a grand tour of the solar system’s planets (SN: 8/19/17, p. 26). After that initial tour was over, both spacecraft continued traveling through the bubble of plasma that originates at the sun.
“When Voyager was launched, we didn’t know how large the bubble was, how long it would take to get [to its edge] and whether the spacecraft could last long enough to get there,” said Voyager project scientist Edward Stone of Caltech.

For most of Voyager 2’s journey, the spacecraft’s Plasma Science Experiment measured the speed, density, temperature, pressure and other properties of the solar wind. But on November 5, the experiment saw a sharp drop in the speed and the number of solar wind particles that hit the detector each second. At the same time, another detector started picking up more high-energy particles called cosmic rays that originate elsewhere in the galaxy.
Those measurements suggest that Voyager 2 has reached the region where the solar wind slams into the colder, denser population of particles that fill the space between stars. Voyager 2 is now a little more than 18 billion kilometers from the sun.
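For a sense of scale, a quick conversion — using standard constants and the article’s 18-billion-kilometer figure — puts Voyager 2 at roughly 120 times the Earth-sun distance, far enough that its radio signals need more than 16 hours to cover the gap:

```python
AU_KM = 1.496e8       # kilometers per astronomical unit (Earth-sun distance)
C_KM_S = 299_792.458  # speed of light in km/s

dist_km = 18e9        # Voyager 2's distance from the sun, per the article

print(dist_km / AU_KM)            # ~120 astronomical units
print(dist_km / C_KM_S / 3600.0)  # ~16.7 hours of light travel time
```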

Intriguingly, Voyager 2’s measurements of cosmic rays and magnetic fields — which Voyager 1 could still make when it crossed the boundary — did not exactly match up with Voyager 1’s observations.
“That’s what makes it interesting,” Stone said. The variations are probably from the fact that the two spacecraft exited the heliosphere in different places, and that the sun is at a different part of its 11-year activity cycle than it was in 2012. “We would have been amazed if they had looked the same.”

The Voyagers probably have between five and 10 years left to continue exploring interstellar space, said Voyager project manager Suzanne Dodd from NASA’s Jet Propulsion Laboratory in Pasadena, Calif.

“Both spacecraft are very healthy if you consider them senior citizens,” Dodd said. The biggest concern is how much power they have left and how cold they are — Voyager 2 is currently about 3.6° Celsius, close to the freezing point of its hydrazine fuel. In the near future, the team will have to turn off some of the spacecraft’s instruments to keep the craft operating and sending data back to Earth.

“We do have difficult decisions ahead,” Dodd said. She added that her personal goal is to see the spacecraft last until 2027, for a total of 50 years in space. “That would be fantastic.”

A new implant uses light to control overactive bladders

A new soft, wireless implant may someday help people who suffer from overactive bladder get through the day with fewer bathroom breaks.

The implant harnesses a technique for controlling cells with light, known as optogenetics, to regulate nerve cells in the bladder. In experiments in rats with medication-induced overactive bladders, the device alleviated animals’ frequent need to pee, researchers report online January 2 in Nature.

Although optogenetics has traditionally been used for manipulating brain cells to study how the mind works, the new implant is part of a recent push to use the technique to tame nerve cells throughout the body (SN: 1/30/10, p. 18). Similar optogenetic implants could help treat disease and dysfunction in other organs, too.
“I was very happy to see this,” says Bozhi Tian, a materials scientist at the University of Chicago not involved in the work. An estimated 33 million people in the United States have overactive bladders. One available treatment is an implant that uses electric currents to regulate bladder nerve cells. But those implants “will stimulate a lot of nerves, not just the nerves that control the bladder,” Tian says. That can interfere with the function of neighboring organs, and continuous electrical stimulation can be uncomfortable.

The new optogenetic approach, however, targets specific nerves in only one organ and only when necessary. To control nerve cells with light, researchers injected a harmless virus carrying genetic instructions for bladder nerve cells to produce a light-activated protein called archaerhodopsin 3.0, or Arch. A stretchy sensor wrapped around the bladder tracks the wearer’s urination habits, and the implant wirelessly sends that information to a program on a tablet computer.
If the program detects the user heeding nature’s call at least three times per hour, it tells the implant to turn on a pair of tiny LEDs. The green glow of these micro light-emitting diodes activates the light-sensitive Arch proteins in the bladder’s nerve cells, preventing the cells from sending so many full-bladder alerts to the brain.
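The closed-loop logic described above — count voiding events, and switch on the LEDs once they exceed three per hour — can be sketched in a few lines. This is an illustrative sketch only; the class name, the one-hour sliding window, and the event bookkeeping are assumptions for the example, not the team’s actual software.

```python
from collections import deque

THRESHOLD_PER_HOUR = 3   # from the article: at least 3 voids per hour
WINDOW_SECONDS = 3600    # one-hour sliding window (an assumption)

class BladderController:
    """Hypothetical sketch of the implant's sense-and-respond loop."""
    def __init__(self):
        self.events = deque()   # timestamps of detected voiding events
        self.leds_on = False

    def record_void(self, timestamp):
        """Called when the stretch sensor detects a voiding event."""
        self.events.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while self.events and timestamp - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        # Turn on the green micro-LEDs when urination is too frequent,
        # suppressing the overactive full-bladder signals to the brain.
        self.leds_on = len(self.events) >= THRESHOLD_PER_HOUR

controller = BladderController()
for t in (0, 600, 1200):        # three voids within 20 minutes
    controller.record_void(t)
print(controller.leds_on)       # True: the LEDs switch on
```

The key design point the article highlights is that stimulation is conditional: unlike continuous electrical stimulators, the light only comes on when the sensor data say it is needed.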
John Rogers, a materials scientist and bioengineer at Northwestern University in Evanston, Ill., and colleagues tested their implants by injecting rats with the overactive bladder–causing drug cyclophosphamide. Over the next several hours, the implants successfully detected when rats were passing water too frequently, and lit up green to bring the animals’ urination patterns back to normal.

Shriya Srinivasan, a medical engineer at MIT not involved in the work, is impressed with the short-term effectiveness of the implant. But, she says, longer-term studies may reveal complications with the treatment.

For instance, a patient might develop an immune reaction to the foreign Arch protein, which would cripple the protein’s ability to block signals from bladder nerves to the brain. But if proven safe and effective in the long term, similar optogenetic implants that sense and respond to organ motion may also help treat heart, lung or muscle tissue problems, she says.

Optogenetic implants could also monitor other bodily goings-on, says study coauthor Robert Gereau, a neuroscientist at Washington University in St. Louis. Hormone levels and tissue oxygenation or hydration, for example, could be tracked and used to trigger nerve-altering LEDs for medical treatment, he says.