Feedback: Trial by water


Feedback is our weekly column of bizarre stories, implausible advertising claims, confusing instructions and more


AS THE nights draw in in the northern hemisphere, we return to the subject of unconventional sunscreen. At the height of the London summer we mentioned claims by the Colorado company Osmosis that it had produced a drinkable sunscreen (5 July). The "Harmonized H2O UV Neutralizer" is now less prominent on its website, but digging suggests the ingredients are "purified water imprinted with unique, vibrational waves".


Brian Handley alerts us to the UK consumer magazine Which?, which asked the British Association of Dermatologists about the product. The response was: "it's complete nonsense to suggest that drinking water will give you a Sun Protection Factor (SPF) of 30".


And David Geddes forwards a document the company sent him, which begins "This randomized clinical trial..." Its authors appear not to ...


To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.



Smoke without fire: What's the truth on e-cigarettes?




They've been called safe, dangerous, a way to quit smoking – and a way to start. New Scientist sifts through the evidence about e-cigarettes


THE juice bar is clean and bright with a nice-looking selection of drinks and cakes – just like a regular London cafe, but with one big difference: the kind of juice that's on offer.


This is one of thousands of small outlets carving out a share of the growing e-cigarette market. Its main selling point is a vast array of flavoured nicotine liquids or e-juices. There are fruit flavours, minty flavours, fun flavours like cola and grown-up ones like rum, pina colada and even tobacco. After testing a few, I settle for watermelon and almond, plus a rechargeable electronic cigarette kit. And so my experiment begins.


E-cigarettes have been around ...





White noise for your nose cancels pungent aromas


By combining compounds in just the right mixture, researchers have worked out how to produce the olfactory equivalent of white noise


HAS someone burned the toast and stunk out the kitchen again? Fire up the smell canceller and sniff freely. That's the proposal from two researchers who are applying the principle behind noise cancelling headphones to noses.


Aural and visual signals are easy to manipulate because both are based on waves, which can be described mathematically – opening up a huge variety of ways to process a signal, such as compression and filtering. It's much harder to write down equations for the complex chemistry behind smells, which is why you can't download the tempting waft of a bacon sandwich.


Now Kush Varshney of the IBM Thomas J. Watson Research Center in New York and his brother Lav Varshney of the University of Illinois at Urbana-Champaign say they have cracked it. The pair have created a mathematical model that predicts how humans perceive the smell of a particular substance based on its physical and chemical properties, by matching a database of compounds to another of perceived smells. One compound could smell 5.6 chalky, -3.2 celery and 0.8 cedar wood, for example, on a rating system from -5 to 10.


To cancel out a smell, they calculate which compounds provide the opposite ratings, giving a zero score across the board. Previous research has shown that an equal blend of around 30 compounds creates "white smell", the neutral olfactory equivalent of white light or white noise. The Varshneys' simulations show a blend of 38 compounds could almost completely cancel out the odours of onion, sauerkraut, Japanese fermented tuna and durian fruit, achieving white smell even in the presence of these notoriously pungent foods (http://ift.tt/1x2kGpH).
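The cancellation step can be pictured as a small linear-algebra problem: if each compound is a vector of perceptual ratings, a cancelling blend is one whose weighted sum drives the mixture's ratings to zero across the board. A minimal sketch in Python – the compounds and ratings here are invented for illustration, not taken from the Varshneys' database:

```python
import numpy as np

# Hypothetical perceptual ratings (columns: descriptors such as
# "chalky", "celery", "cedar wood"), on the paper's -5 to 10 scale.
target = np.array([4.0, -2.0, 1.5])   # the odour we want to cancel

# Candidate cancelling compounds, one row each (invented values).
candidates = np.array([
    [-3.0,  1.0, -0.5],
    [-1.0,  2.0, -1.0],
    [ 0.5, -1.5,  0.0],
])

# Solve for blend weights w so that target + candidates.T @ w ≈ 0,
# i.e. the combined mixture scores zero on every descriptor.
w, *_ = np.linalg.lstsq(candidates.T, -target, rcond=None)

mixture = target + candidates.T @ w
print(np.round(mixture, 6))  # ≈ [0. 0. 0.] -> "white smell"
```

In a real formulation the weights would also need to be physically sensible (non-negative concentrations), which turns this into a constrained optimisation rather than a plain least-squares solve.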


The pair haven't built a smell cancelling device yet, but they are confident the maths will check out and it should be possible. They were inspired by IBM's work with its advanced computer Watson, which is currently learning to cook by identifying the flavour compounds different ingredients have in common.


Previous attempts to digitise smells have led to the likes of the oPhone, a device that releases mixtures of compounds to recreate more than 300,000 smells, but these have never really taken off. That's because smells linger, making it difficult to use such devices repeatedly. Cancellation might be able to fix this, as it is more sophisticated than simply masking an existing smell with a more powerful odour.


Noam Sobel of the Weizmann Institute of Science in Rehovot, Israel, invented the concept of white smell and is also working on a cancellation device, though without success so far. "I think the ideas are sound," he says. But if white smell is the future, what does it actually smell of? "It's not very pleasant, but it's not foul. It's not very edible smelling, but it doesn't smell poisonous."


This article appeared in print under the headline "Mix 38 scents to oust any smell"


Issue 2993 of New Scientist magazine


  • New Scientist

  • Not just a website!

  • Subscribe to New Scientist and get:

  • New Scientist magazine delivered every week

  • Unlimited online access to articles from over 500 back issues

  • Subscribe Now and Save




If you would like to reuse any content from New Scientist, either in print or online, please contact the syndication department first for permission. New Scientist does not own rights to photos, but there are a variety of licensing options available for use of articles and graphics we own the copyright to.



Left or right-wing? Brain's disgust response tells all


The way your brain reacts to a single disgusting image can be used to predict whether you lean to the left or the right politically.


A number of studies have probed the emotions of people along the political spectrum, and found that disgust in particular is tightly linked to political orientation. People who are highly sensitive to disgusting images – of bodily waste, gore or animal remains – are more likely to sit on the political right and show concern for what they see as bodily and spiritual purity, so tend to oppose abortion and gay marriage, for example.


A team led by Read Montague, a neuroscientist at Virginia Tech in Roanoke, recruited 83 volunteers and performed fMRI brain scans on them as they looked at a series of 80 images that were either pleasant, disgusting, threatening or neutral. Participants then rated the images for their emotional impact and completed a series of questionnaires that assessed whether they were liberal, moderate or conservative.


The brain-imaging results were then fed to a learning algorithm which compared the whole-brain responses of liberals and conservatives when looking at disgusting images versus neutral ones.


For both political groups, the algorithm was able to pick out distinct patterns of brain activity triggered by the disgusting images. And even though liberals and conservatives consciously reported similar emotional reactions to the images, the specific brain regions involved and their patterns of activation differed consistently between the two groups – so much so that they represented a neural signature of political leaning, the team concludes.
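The classification idea can be sketched with synthetic data: treat each participant's whole-brain response to one image as a feature vector and check how well a simple learning algorithm separates the two groups. Everything below is invented for illustration – the real study used actual fMRI maps and a more sophisticated pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for whole-brain responses to one disgusting image:
# 83 "participants" x 200 voxel features, with a group-dependent shift
# in the first 20 features.
n, d = 83, 200
labels = rng.integers(0, 2, n)              # 0 = liberal, 1 = conservative
X = rng.normal(size=(n, d))
X[:, :20] += np.where(labels[:, None] == 1, 0.6, -0.6)

# Leave-one-out nearest-centroid classification: a minimal analogue of
# predicting political leaning from a single brain response.
correct = 0
for i in range(n):
    mask = np.arange(n) != i
    c0 = X[mask & (labels == 0)].mean(axis=0)
    c1 = X[mask & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == labels[i]

print(f"leave-one-out accuracy: {correct / n:.2f}")
```

The point of the toy example is only that a consistent group difference spread across many features can yield high single-stimulus accuracy, as the team reports.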


One-shot predictor


Conservatives showed increased activity in brain regions previously implicated in processing disgust, such as the basal ganglia and amygdala, but also in a wide range of regions involved in regulating emotion, attention and integrating information. In liberals the brain showed increased activity in different regions, but these were just as diverse.


The team found that these neural signatures of disgust can be used to predict political orientation. "In fact, the responses in the brain are so strong that we can predict with 95 per cent accuracy where you'll fall on the liberal-conservative spectrum by showing you just one picture," says Montague. "This was surprising as there are no other reports where people's response to just one stimulus predicts anything behaviourally interesting."


Darren Schreiber of the University of Exeter, UK, who is also interested in the neuroscience of politics, says this approach is an advance on previous brain-imaging work, which highlighted the importance of one or two isolated brain regions to political leaning rather than entire networks.


And while this is not the first time differences have been found in the brains of liberals and conservatives, Schreiber says that the combination of whole-brain analysis with a sophisticated learning algorithm marks a step forward. "It's not only a powerful replication and extension of previous work, but it's also incredibly accurate."


Journal reference: Current Biology, DOI: 10.1016/j.cub.2014.09.050





Spoiler-free guide to the science of Interstellar






As Christopher Nolan's epic new film opens on an Earth of the near future, it's not quite apocalypse now, but it will be soon. Crops are failing all over the planet. Humanity's final generation has already been born. We've got to get off the planet. And not just off to a nice moon in our solar system: we've got to go Interstellar.


This is going to require some serious science.


The film's hard-science pedigree is guaranteed by its science advisor and executive producer, Kip Thorne, one of the world's leading experts on Einstein's theory of general relativity. "The things he was able to open up for me were far more exotic and exciting than anything I could've come up with as a screenwriter," Nolan told New Scientist.


Some critics have said they wished they'd brushed up on their physics before seeing the film – so here's our spoiler-free guide to everything you need to know before you see Interstellar.


What is this "dust" that threatens the Earth's food supply?

The agent of destruction is a blight fungus. In the film it is sweeping across the world, and has already wiped out crops such as wheat and okra. In the real world, blight is indeed a serious threat – responsible for the Irish potato famine – and a different blight fungus, Ug99, now threatens wheat. Norman Borlaug, dubbed the father of the Green Revolution, said Ug99 "has immense potential for social and human destruction".


Nolan was influenced by the real-life ecological disaster of the Dust Bowl in 1930s North America, when the rich top soil essential to farming dried out and blew away, desolating vast areas and causing famine and mass human displacement – a situation that could yet happen again given the severe, ongoing drought in the US.


But chin up people. Consider the movie's tagline: the end of Earth will not be the end of us.


How can we ensure the survival of the human race?

Colonising our neighbouring planets and their moons will be the first step. Once we've arrived at our new home we need to grow crops and establish a viable population.


Poor old Anne Hathaway, playing Interstellar's only female astronaut, Amelia Brand, isn't expected to do all this herself – she is taking a whole load of frozen human embryos with her. Presumably some artificial wombs, too.


But as Stephen Hawking has argued, the long-term survival of our species depends on us developing interstellar travel.


Even if we don't render our planet uninhabitable, the sun will eventually swell up and engulf Earth. This won't happen for 5 billion years, but nevertheless, Michael Caine's character in the film – based on Thorne – insists we have to travel through a wormhole to another galaxy. "We must confront the reality that nothing in our solar system can help us," he says.


(Image: Paramount Pictures)


How could we travel to planets beyond our solar system?

It's a long way to the nearest exoplanets. To get there without spending thousands of years on the journey, the options are limited. Physics won't let us go faster than the speed of light, but it will allow for radical shortcuts. There are efforts to devise Star Trek-inspired warp drives, but even if they were possible, they could be deadly. That leaves the main contender: wormholes.


Wormholes are hypothetical tunnels through space-time – predicted by Einstein's general theory of relativity – that can connect distant parts of the universe.


Until recently, wormholes have been seen as unworkable curiosities. That's now changed. Physicists have described how you could make one big enough to send a message or a spaceship through – or even reunite star-crossed lovers.


We can even visualise what it would look like to travel through one – rendered here with slightly cheaper graphics software than in Interstellar.


We won't manage to make a wormhole for a while – that will take a highly advanced civilisation. If we do ever get to go through one, who knows what we'll find when we get out. The crew on the Interstellar spaceship, the Endurance, discover an unwelcome beast at the other side of theirs: a supermassive black hole.


(Image: Paramount Pictures)


What are the real dangers of approaching a black hole?

The beautiful black hole in Interstellar is not just visually stunning, it is scientifically accurate. At the heart of a black hole is a singularity, a point of effectively infinite density. This exerts a lot of gravity, which drags matter towards it, spiralling into the hole in a vast swirl called an accretion disc.


Kip Thorne worked out the mathematics of what happens to the accretion disc, and found that the intense gravity warps the disc around the black hole, creating the spectacular halo that is one of the movie's visual highlights.


If you fall into a black hole, or get too close to its intense gravity (and somehow survive), you'll notice weird things happening to time. This is a favourite Nolan trope, also used in his mind-bending hit Inception, in which time moved at different speeds depending on the dream state his characters were in.
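The slowdown can be put in numbers: for a static observer hovering at radius r outside a non-rotating black hole, clocks run slow by a factor of √(1 − r_s/r), where r_s is the Schwarzschild radius. A rough sketch with an invented example mass (Interstellar's black hole is actually a rapidly spinning one, which this simple formula ignores):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def time_dilation_factor(mass_kg, r_m):
    """Ratio of local clock rate to a far-away clock's rate for a
    static observer at radius r outside a non-rotating black hole."""
    rs = 2 * G * mass_kg / c**2   # Schwarzschild radius
    if r_m <= rs:
        raise ValueError("observer must stay outside the event horizon")
    return math.sqrt(1 - rs / r_m)

# A hypothetical supermassive black hole of 100 million solar masses.
M = 1e8 * 1.989e30
rs = 2 * G * M / c**2
print(time_dilation_factor(M, 1.1 * rs))  # ~0.30: clocks tick at under
                                          # a third of the far-away rate
```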








Today on New Scientist


TTIP: The science of the US-European trade megadeal

It will be the biggest trade deal the world has ever seen – and that means you'll see changes in health, the environment and even happiness


Reading on screens is different – does it matter?

We're beginning to understand how digital devices affect literacy – but don't assume that paper is always better than screens


Why bird divorces are good news for the females

When bird pairs break up females often lay more eggs with a new partner, but the split can be disastrous for the male of the species


Heart ops shrink thanks to surgeon in your vein

A tiny tube with a blade at the end can enter your heart via your neck to fix defects without having to cut open your whole chest


Supernova shock waves create glowing arcs across sky

A forest of mysterious radiation arcs seen across our view of the universe might be down to a supernova-powered bubble expanding towards our sun


Gold origami exerts strange power over light

Sheets of gold one nanoparticle thick have been folded into tiny origami. Dubbed plasmene, the material has some of the weirdest optical properties around


Goodbye, paper: What we miss when we read on screen

Digital technology is transforming the way we read and write. Is it changing our minds too – and if so, for better or worse?


Computers are learning to see the world like we do

It is surprisingly difficult to build computers that can recognise the many different objects we see every day, but they are getting better all the time


Brain decoder can eavesdrop on your inner voice

As you read this, your neurons are firing – that brain activity can now be decoded to reveal the silent words in your head





Reading on screens is different – does it matter?


We're beginning to understand how digital devices affect literacy – but don't assume that paper is always better than screens


CAN gadgets help to educate young people? Many would say yes: digital literacy is seen as key to a modern upbringing, and screens are being introduced at ever earlier ages. But to others, they're potentially harmful distractions. There are schools that frown on kids toting tablets and universities that ban laptops from lecture theatres.


Many of those who eschew these gadgets have been swayed not by facts and research, but by opinion and experience – teachers concerned that tech-addled kids lack communication skills and focus, for example. That doesn't mean their views should be discounted: rather, it means the research is sorely needed.


Such studies are now under way, although given that phones and tablets are already an integral part of our lives, the researchers are mostly playing catch-up. Early findings suggest screens can indeed affect the ways we read and write (see "Goodbye, paper: What we miss when we read on screen"). The real question is: does that matter?


The same researchers who have identified these effects are quick to point out they are very specific. A gripping yarn on an e-reader may be a perfect fit; a highbrow tome on a phone, maybe not. But that doesn't make digital reading a problem per se. And gadgets' shortcomings must be weighed against their advantages: portability, economy, access to the world's knowledge and so on.


Much-publicised worries about young brains becoming wired up differently are under-researched too. Even if this does happen, it's not clear any changes are deleterious. And in some cases, there may be simple solutions, such as making kids write with styluses rather than keyboards.


That's not to say we shouldn't carefully consider how, when and why children use technology. Screens aren't going away any time soon, and they raise issues that go far beyond literacy. But our approach to their use should not be in thrall to yesterday's values. Only when we know precisely what screens do to us will we know precisely what we should do with them.


This article appeared in print under the headline "Screening for problems"









Why bird divorces are good news for the females


AS THE song would have it, birds, bees and even educated fleas fall in love. For some birds, however, the love turns sour – they get divorced.


Many birds may seem like model parents, with males and females investing more equally in caring for their young than the average mammal. While 85 per cent of bird species are socially monogamous – they form pair bonds and share the workload – divorce is common, occurring in 92 per cent of these species, including the humble great tit (pictured).


By analysing data recorded from across 64 species, Antica Culina at the University of Oxford and colleagues found that birds were most likely to divorce when their breeding success was low (Biological Reviews, doi.org/wnw).


Culina found the signs of a doomed partnership emerge early – pairs in which the female produced a low number of eggs or laid them relatively late were less likely to stick together for a second breeding season. It is possible that unimpressed males may decide they can do better. But females are believed to have some control over the number of eggs they lay, so a small clutch size may actually be a sign of female discontent, says Culina. "It might be that she's already made the decision, and because she doesn't like him very much, she won't make many eggs."


"Divorce can be beneficial or detrimental for an individual," says Tamás Székely of the University of Bath, UK. "By ditching a poorly performing mate, a bird may well hope to find a better one." But he says divorce can have disastrous consequences, especially if there aren't many other males or females to choose from.


Culina's study found that, when birds do break up, it's the females that benefit. "Females who divorce gain better breeding success with a new partner, but males who divorce show no improvement," she says. This might indicate that it is the females that decide it's time to find a new mate. A male wanting to leave a partner risks leaving territory, too – so they may have more to lose.


How does a bird decide whether to stay or go? Do they monitor each other's behaviour, for example? Not much is known, says Székely.


This article appeared in print under the headline "Why bird break-ups are bad news for the boys"









Heart ops shrink thanks to surgeon in your vein


LAST year, a tiny heart surgeon entered the neck of a pig, slipped down its jugular vein and into its still-beating heart.


With the pig's heart pumping, the device cut a small hole in the wall between the two upper chambers before being removed.


The successful test, which mimicked a procedure to fix a heart defect in children, showed that the device could one day be used on a range of operations, including those that currently require cutting open a child's chest and ribs to get at the heart.


A team of researchers led by Pierre Dupont at Boston Children's Hospital built and tested the device, which consists of a long tube with cutting teeth at the end (see diagram). Surgeons can use external ultrasound images to guide it to the heart, and then switch the teeth on remotely (The International Journal of Robotics Research, doi.org/wmc).


To make such a small and dexterous cutting tool, Dupont's team turned to microelectromechanical systems (MEMS) fabrication. This allowed them to build the tool by laying down layers of metal, creating moving parts that are only a few microns across.


The tubular section of the device is strong enough to hold the heart tissue in place while it's beating, allowing the cutting head to remove tissue. Pieces that are cut off are whisked down the tube and out of the body. This could be particularly useful for an operation called a septal myectomy, in which excess heart muscle needs to be cut away to restore blood flow to the body.


Sam Kesner, a bioengineer at Harvard University's Wyss Institute, says the device is unique because the tubing that slides it into the heart is stiff enough to give the cutting tool purchase. "Normal catheters can't provide a lot of force. Cutting tissue away is very hard," he says.


This article appeared in print under the headline "Heart ops shrink thanks to surgeon in your vein"









Gold origami exerts strange power over light


SHEETS of gold one nanoparticle thick have been folded into tiny origami. Dubbed plasmene, the material has some of the weirdest optical properties around. It could someday enable things like invisibility cloaks and super-efficient solar cells.


Plasmonic materials, such as gold and silver, capture light and transmit it along their surfaces as waves of electrons called plasmons. They can squeeze light into spaces smaller than the laws of physics normally allow.


That makes them tempting materials for use in antennas to pick up light signals, and possibly someday Harry Potter-style invisibility cloaks. But to be useful, they need to be manipulated into the right shape – and building nanoscale shapes out of gold or silver has so far been impossible.


Wenlong Cheng at Monash University in Melbourne, Australia, and his colleagues made the thin material by coating nanocubes of gold and silver in polystyrene, suspending them in a chloroform solution, then spreading it over a fine mesh. Cheng called the substance plasmene after the famous carbon-based graphene (ACS Nano, doi.org/wmf).


Using the same technology that etches lines into semiconductors to make computer chips, Cheng was able to fold the material into almost any shape, including an origami bird.


"That amazed me," says Michael Cortie at the University of Technology Sydney in Australia. The bendy material may also find uses in medical materials and wearable technology.


This article appeared in print under the headline "Gold origami exerts strange power over light"









Supernova shock waves create glowing arcs across sky


Ghostly arcs that haunt the sky may come from an expanding shock-wave shell that is pressing in on our solar system.


Although they are invisible at optical wavelengths, looking at the sky in radio waves, X-rays or gamma rays turns up giant streaks across the heavens. Dubbed "radio loops", they have perplexed astronomers since their discovery in the 1950s.


Previous explanations included the idea that these glowing filaments are the leftover ripples from nearby supernova explosions, or streams of gas that sprout from the Milky Way's centre and sprawl out on galactic scales.


Now, Matias Vidal and his team at the Jodrell Bank Centre for Astrophysics at the University of Manchester, UK, say new data support a different answer. "The idea is that there's an expanding shell of gas that was powered by a number of supernova explosions," he says, and it is now nearing our sun.


The shell, they argue, is the outside edge of a well-known bubble of hydrogen gas whose centre lies about 400 light years from us. The bubble has young, massive stars at its centre.


As this shell expands – powered by supernova blasts and winds from stars inside the bubble – it compresses the magnetic field lines of the galaxy as it ripples through. That squeeze along the bubble's outer edges boosts the local magnetic field strength, forcing electrons and other charged particles along spiralling paths. These spiralling particles in turn give off electromagnetic radiation with unique, polarised signatures – the glowing radio loops.
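The spiralling itself happens at the electron gyrofrequency, ν = eB/2πmₑ, which scales directly with field strength – so squeezing the field lines shifts how the particles gyrate and radiate. A back-of-envelope sketch, assuming an illustrative interstellar field of a few microgauss (the observed radio emission comes from relativistic electrons radiating at large multiples of this base frequency):

```python
import math

e = 1.602e-19    # electron charge, C
m_e = 9.109e-31  # electron mass, kg

def gyrofrequency_hz(b_tesla):
    """Non-relativistic electron gyrofrequency in a magnetic field."""
    return e * b_tesla / (2 * math.pi * m_e)

# Illustrative interstellar field of ~5 microgauss (5e-10 tesla).
print(f"{gyrofrequency_hz(5e-10):.1f} Hz")  # ~14 Hz
```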


The filaments are most pronounced where the growing bubble presses into the galaxy's ambient dust and gas particles.


Deeper puzzle


Vidal's team looked at polarised light using NASA's WMAP probe – which observed the oldest light in the universe between 2001 and 2010. They found five previously undiscovered loops and six that were already known about. These glowing arcs are best explained if they hug the shell of a growing bubble that has our sun at its edge, says Vidal.


"I think they make a good case that this is part of this big local structure," says theoretical astrophysicist David Spergel at Princeton University, an expert on WMAP.


The glowing filaments mask a deeper cosmological issue, says Spergel. Earlier this year, the BICEP2 telescope team claimed to see signals from primordial gravitational waves, which would be evidence that the universe went through an exponential growth spurt called inflation in the first moments after the big bang.


But the signal was contaminated by foreground emissions from dust inside the Milky Way, which must be carefully accounted for before any gravitational waves can be seen. The radio loops might pose a similar problem for future observations.


That means mapping these filaments and seeing how they vary across radio frequencies is more than just a cool puzzle, Spergel says. "This is one of these cases where one scientist's annoying foreground is another one's fascinating, 50-year-old mystery."


Journal reference: Submitted to Monthly Notices of the Royal Astronomical Society (http://ift.tt/103oBYP)





Goodbye, paper: What we miss when we read on screen


(Image: Richard Wilkinson)


Digital technology is transforming the way we read and write. Is it changing our minds too – and if so, for better or worse?


WE READ more than ever – three times as much as we did in 1980, according to one study. But we're reading differently. Take a look around a train carriage full of commuters nowadays and you'll probably see more people perusing text on phones and tablets than in newspapers and books.


We're writing differently, too. Not so long ago people at meetings and lectures scribbled away furiously with their pens as they took notes. Today, talks and presentations are accompanied by the manic click-clack of laptop keyboards.


Hurrah, some say. Our smartphones and tablets are expanding our worlds. We now have access to vast libraries literally at our fingertips. Good riddance to shoulder-wrenching textbooks, teetering towers of dusty papers, leaky ...





Computers are learning to see the world like we do


It is surprisingly difficult to build computers that can recognise the many different objects we see every day, but they are getting better all the time


WHAT animal is in the picture above? Glance at the page, wheels in the brain spin: yeah, that's a bird. The response comes so fast that you barely notice the processing behind it.


If only machines found it that easy. Object recognition is surprisingly tricky for computers. The webcomic xkcd recently poked fun at the problem, bemoaning how arduous it would be to build a system that could determine whether a photo was taken in a national park and contained a bird.


Artist Randall Munroe had thrown down the gauntlet: last week, image host Flickr launched Park or Bird, a website designed to solve the exact problem in the comic. Just drag a photograph into the page, and it will make an educated guess. Gerry Pesavento, senior director of product management at Yahoo in San Francisco, which owns Flickr, says the site doubles as a playful introduction to a genuine problem.


"It's showing that image intelligence is happening very quickly," he says.


Park or Bird wasn't the result of a sudden whim. For the past year, Flickr has been training neural networks to figure out whether a given picture contains any of 1000 different objects, from a cat to a sunset. If Flickr can nail this problem, it will vastly improve the search function for its billions of photos, the firm says – letting people find shots of any item even if the photographer hasn't bothered to tag it.


The promise of object recognition extends far beyond better search through photos. Autonomous cars or people who are blind stand to benefit greatly from a system that can easily and accurately identify people and street signs.


"The sort of algorithms that we're using at Flickr right now are exactly the sort of algorithms that are going to be helping robots see and navigate visually," says Yahoo's Simon Osindero.


Google made waves two years ago with the announcement that it had trained neural networks to spot cats in YouTube videos. Chinese search giant Baidu is also in on the game, offering a translation app that tries to provide the right English word for whatever you have taken a picture of. Amazon's Fire phone comes with a feature that recognises the front covers of books or CDs in the real world and directs you to the relevant shopping website.


Olga Russakovsky at Stanford University in California co-organises the ImageNet Large Scale Visual Recognition Challenge, an annual competition with 1000 categories of objects to identify. In the four years since the challenge began, the quality of the entries has improved remarkably quickly. The winner in 2010 made mistakes 28.2 per cent of the time. Last year's Google-led winning program had an error rate of just 6.7 per cent – only a smidge behind an actual human annotator.


Still, certain objects consistently trip computers up. Competitors in the ImageNet Challenge struggle to differentiate between small and slender hand tools, like spoons and screwdrivers. Things that have a metallic or reflective surface can also be hard to identify. Even so, Russakovsky is confident that we are only a few years away from a machine that can parse the world as well as a human.


"I don't think there's anything fundamental stopping us," she says.


This article appeared in print under the headline "It's a bird, right?"


Issue 2993 of New Scientist magazine









Brain decoder can eavesdrop on your inner voice


As you read this, your neurons are firing – that brain activity can now be decoded to reveal the silent words in your head


TALKING to yourself used to be a strictly private pastime. That's no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.


"If you're reading text in a newspaper or a book, you hear a voice in your own head," says Brian Pasley at the University of California, Berkeley. "We're trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak."


When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.


In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain's temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might only react to sound waves that had a frequency of 1000 hertz, for example, while another set might only react to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269).
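The idea of frequency-tuned neurons can be made concrete with a toy model. This is a deliberately simplified sketch, not the study's actual decoder: each simulated "neuron" fires only for sounds near an assumed preferred frequency, so the firing pattern alone is enough to recover the stimulus.

```python
# Toy illustration of frequency-tuned decoding. The neuron names,
# preferred frequencies and bandwidth are invented for illustration.

NEURONS = {f"neuron_{f}Hz": f for f in (500, 1000, 2000, 4000)}
BANDWIDTH = 200  # Hz either side of the preferred frequency (assumed)

def responses(sound_hz):
    """Which simulated neurons fire for a pure tone at sound_hz."""
    return {name: abs(sound_hz - pref) <= BANDWIDTH
            for name, pref in NEURONS.items()}

def decode(firing):
    """Recover the stimulus frequency from the firing pattern alone."""
    active = [NEURONS[name] for name, fired in firing.items() if fired]
    return sum(active) / len(active) if active else None

print(decode(responses(1000)))  # → 1000.0
```

Pasley's team effectively learned such tuning curves from real recordings while participants listened to speech, then inverted them to reconstruct what was heard.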


The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.


Mind-reading


To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy's inaugural address or the nursery rhyme Humpty Dumpty.


Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to what aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves (see diagram).
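The spectrogram step can be illustrated in miniature. The sketch below is pure Python and deliberately naive (real pipelines use optimised FFT libraries): it slices a signal into short windows and measures how much of each candidate frequency is present in each one, so a change of tone over time shows up as a shift along the frequency axis.

```python
import math

def dft_magnitudes(window, sample_rate, freqs):
    """Magnitude of each candidate frequency in one window (naive DFT)."""
    n = len(window)
    mags = []
    for f in freqs:
        re = sum(x * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, x in enumerate(window))
        im = sum(x * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, x in enumerate(window))
        mags.append(math.hypot(re, im) / n)
    return mags

def spectrogram(signal, sample_rate, window_size, freqs):
    """One row of frequency magnitudes per window: time vs frequency."""
    return [dft_magnitudes(signal[i:i + window_size], sample_rate, freqs)
            for i in range(0, len(signal) - window_size + 1, window_size)]

# A signal that switches from 100 Hz to 300 Hz halfway through
rate = 1000
sig = [math.sin(2 * math.pi * 100 * t / rate) for t in range(500)] + \
      [math.sin(2 * math.pi * 300 * t / rate) for t in range(500)]
spec = spectrogram(sig, rate, 250, [100, 300])
# Early windows peak at 100 Hz, later windows at 300 Hz
```

In the study the direction is reversed: the decoder predicts the spectrogram from neural activity, and the predicted spectrogram is then matched back to sounds and words.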


Despite the neural activity from imagined or actual speech differing slightly, the decoder was able to reconstruct which words several of the volunteers were thinking, using neural activity alone (Frontiers in Neuroengineering, doi.org/whb).


The algorithm isn't perfect, says Stephanie Martin, who worked on the study with Pasley. "We got significant results but it's not good enough yet to build a device."


In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. "We don't think it would be an issue to train the decoder on heard speech because they share overlapping brain areas," says Martin.


The team is now fine-tuning its algorithms, by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. "The bar is very high," says Pasley. "It's preliminary data, and we're still working on making it better."


The team have also turned their hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to what aspects of the music. "Sound is sound," says Pasley. "It all helps us understand different aspects of how the brain processes it."


"Ultimately, if we understand covert speech well enough, we'll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can't speak," he says.


Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?


In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. "It would be really interesting to look into. It would allow us to predict what people are doing or thinking," she says. "But we need individual decoders that work really well before combining different senses."


This article appeared in print under the headline "Hearing our inner voice"


Issue 2993 of New Scientist magazine









Today on New Scientist


Seabed feeding frenzy proves dead jellyfish get eaten

Time-lapse imagery of scavengers tucking in proves that dead jellyfish aren't unpalatable after all, so can return nutrients to the sea's food webs


Computer with human-like learning will program itself

The Neural Turing Machine will combine the best of number-crunching with the human-like adaptability of neural networks – so it can invent its own programs


Cargo rocket explosion is a blow for commercial space

No one was hurt when the uncrewed Orbital Sciences spacecraft blew up seconds after take-off – but has the reputation of private shuttles been injured?


Cellular alchemy turns skin cells into brain cells

To turn one cell into another you usually need to first rewind them into embryonic-like stem cells. But there is another, potentially safer, way


Trap cells in sound to create strong cartilage

Ultrasound waves can be used to trap cartilage cells and bind them into sheets that can be easily grafted on to damaged tissue


Number of disease outbreaks jumps fourfold since 1980

In the past 30 years, the number of disease outbreaks has increased, as has the number of diseases causing them – infections from animals are a big cause


A killer plague wouldn't save the planet from us

One-child policies and plagues that cut the population won't be enough to fix our ecological problems, models suggest. Only changes in consumption will do that


Coming face to face with a shy thresher shark

Meeting sharks can be a moving experience, says photographer Jean-Marie Ghislain, who works to educate people on the plight of sharks around the world


What one Amazonian tribe teaches us

From female suicide to the nature of being civilised, probing tribal life in the 21st century needs an unflinching, critical eye


Guzzling milk might boost your risk of breaking bones

A study of more than 100,000 Swedes has revealed that drinking a lot of milk is associated with an increased risk of bone fractures and death


The comeback cubs: The great sea otter invasion

After being nearly wiped out a century ago, the sea otter population in Canada is booming. But not everyone is glad to welcome them back


Cold moon Enceladus has heart of warm fluff

Known for shooting spectacular plumes of water into space, Saturn's tiny moon keeps warm thanks to a core that is slushy and soft rather than rock solid





Seabed feeding frenzy proves dead jellyfish get eaten



Deep in the North Sea off Norway, a jelly-feast is under way – and it's the last thing researchers expected to find.


Daniel Jones of the National Oceanography Centre in Southampton, UK, and his colleagues lowered dead jellyfish down to the seabed on a platter fitted with a time-lapse camera. Previous observations of large blooms of jellyfish dying suggested that the creatures are so unpalatable that they pile up in heaps called jelly lakes, which slowly rot away. These observations were quite limited, however, so Jones's team wanted to find out if they were reliable.


The time-lapse footage was a revelation. It showed a host of scavengers, including hagfish, crabs and lobsters, tucking into the free meal of dead jellyfish, suggesting that scavengers like eating jellyfish after all. The carcasses were polished off in as little as 2½ hours, with barely a scrap remaining. At the height of the feeding frenzies, more than 1000 scavengers joined the feast, including transparent prawns and tiny crustaceans called amphipods.


"The results were very surprising, as we expected slow bacterial degradation, not rapid scavenging by fish and crustaceans," says Jones.


Jellyfish recycled


The finding is important for modelling the effects of jellyfish blooms – which are increasing as the world warms and pollution makes the oceans more nutrient-rich – on marine biodiversity. It means that carbon and other nutrients in the carcasses return to the deep-sea food chain after all, challenging the assumption that "jelly lakes" are ecological dead ends that remove carbon from food webs.


"Our results show that much of this carbon in jellyfish could, in fact, make it into deep-sea food webs, fuelling these systems and increasing the numbers and diversity of fish and seabed creatures," says Jones.


One downside of the discovery, however, is that jelly-falls may not make effective sinks for human-produced carbon emissions as some researchers had thought.


Journal reference: Proceedings of the Royal Society B, DOI: 10.1098/rspb.2014.2210





Computer with human-like learning will program itself


The Neural Turing Machine will combine the best of number-crunching with the human-like adaptability of neural networks – so it can invent its own programs


YOUR smartphone is amazing, but ask it to do something it doesn't have an app for and it just sits there. Without programmers to write apps, computers are useless.


That could soon change. DeepMind Technologies, a London-based artificial-intelligence firm acquired by Google this year, has revealed that it is designing computers that combine the way ordinary computers work with the way the human brain works. It calls this hybrid device a Neural Turing Machine. The hope is that it won't need programmers, and will instead program itself.


Neural networks, which make up half of DeepMind's computer architecture, have been around for decades but are receiving renewed attention as more powerful computers take advantage of them. The idea is to split processing across a network of artificial "neurons", simple units that process an input and pass it on. These networks are good at learning to recognise pieces of data and classify them into categories. Facebook recently trained a neural network to identify faces with near-human accuracy (read more about how computers are learning to see, on page 24).
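The "simple units that process an input and pass it on" can be sketched in a few lines of Python. This is illustrative only: the weights and biases below are invented, not taken from any trained network.

```python
import math

def neuron(inputs, weights, bias):
    """One unit: weighted sum of inputs squashed to the range (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two inputs flow through a two-neuron hidden layer into one output unit
hidden = layer([0.5, -1.2], [[0.7, 0.3], [-0.4, 0.9]], [0.1, 0.0])
output = neuron(hidden, [1.5, -2.0], -0.2)  # a value between 0 and 1
```

Training such a network means adjusting the weights and biases until the outputs match the desired classifications; the architecture itself is this simple.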


While that's impressive, the flip side is that neural networks struggle with basic computational tasks such as copying and storing data. "These neural networks that are so good at recognising patterns – a traditional domain for humans – are not so good at doing the stuff your calculator has done for a long time," says Jürgen Schmidhuber of the Dalle Molle Institute for Artificial Intelligence Research in Manno, Switzerland.


Bridging that gap could give you a computer that does both, and can therefore invent programs for situations it has not seen before. The ultimate goal is a machine with the number-crunching power of a conventional computer that can also learn and adapt like a human.


DeepMind's solution is to add a large external memory that can be accessed in many different ways, which mathematician Alan Turing realised was a key part of ordinary computing architecture, hence the name Neural Turing Machine (NTM). This gives the neural network something like a human's working memory – the ability to quickly store and manipulate a piece of data.


To test the idea, they asked their NTM to learn how to copy blocks of binary data it received as input, and compared its performance with a more basic neural network. The NTM learned much faster, and could reproduce longer blocks with fewer errors. Results were similar for experiments on remembering and sorting lists of data. When the team studied what the NTM was doing, they found its methods closely matched the code that a human programmer would have written (http://arxiv.org/abs/1410.5401). These tasks are extremely basic, but essential if such machines are to create sophisticated software.
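What "learning to copy blocks of binary data" means can be shown with a toy version of the task. This is a sketch of the training setup only, not DeepMind's code: the target output is simply the input block, and a candidate model is scored by how many bits it gets wrong.

```python
import random

def copy_task_example(length):
    """One training pair for the copy task: the target is just the input."""
    block = [random.randint(0, 1) for _ in range(length)]
    return block, list(block)

def errors(predicted, target):
    """Bits the model got wrong; a perfect copier scores zero."""
    return sum(p != t for p, t in zip(predicted, target))

inp, target = copy_task_example(8)
print(errors(inp, target))  # → 0: echoing the input verbatim is the correct program
```

The hard part, and the point of the NTM's external memory, is getting a network to discover that echoing procedure itself, and to generalise it to longer blocks than it saw in training.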


Other researchers at Google are also trying to teach computers to learn more complex processes. One team recently published details of a neural network that is capable of learning to read simple code and execute it without first being taught the necessary programming language, a bit like successfully adding two numbers without knowing what addition or numbers actually are (http://arxiv.org/abs/1410.4615).


The mixed architecture used by DeepMind seems sensible, says Chris Eliasmith at the University of Waterloo, Canada. "As humans we classify but we also manipulate the classification," he says. "If you want to build a computer that is cognitive in the way that we are, it is going to require this kind of control."


There are several reasons why this research area is so hot right now. "Digital computers are basically hitting a wall," says Eliasmith. It seems like Moore's law – the trend for microchips to double in capacity every two years – is ending, he says.


And then there are growing concerns over energy efficiency. Conventional computers that could match the human brain, if such a thing could be made, would need multiple full-scale power plants. Firms like IBM are already building neuron-inspired hardware with much lower power requirements, and that means software must also adapt. "If we want to use our most efficient hardware, we have to express our algorithms in a manner which fits," Eliasmith says.


This article appeared in print under the headline "Ditch the programmers"











Cargo rocket explosion is a blow for commercial space



A commercial rocket flying cargo to the International Space Station exploded just 14 seconds after lift-off last night, creating a spectacular fireball and shockwave as the fully fuelled rocket crashed back down onto its launch pad.


No one was hurt when the uncrewed spacecraft was destroyed, but the loss will dent attempts by the spacecraft's builder, Orbital Sciences Corporation of Dulles, Virginia, to build confidence in its ability to supply ISS cargo services to NASA.


After a one-day launch delay owing to an unauthorised boat breaching the no-go area around the Mid Atlantic Regional Spaceport at Wallops Island, Virginia, Orbital's Antares rocket was launched at 18.22 EDT (22.22 GMT) on 28 October. It seemed to take off normally but its progress quickly slowed as it lost thrust and the rocket came to a halt after 14 seconds, just after clearing the launch tower. It became engulfed in flames and exploded as it plummeted back down to earth.


"The rocket's ascent stopped and there was some disassembly of the first stage" before the explosion, says former NASA astronaut and ISS commander Frank Culbertson, now vice president of Orbital Sciences, in a post-incident press briefing at the coastal spaceport.


Culbertson also says the spaceport's range safety officer – whose job it is to destroy the rocket if it veers off-course and threatens to hit a populated area – is thought to have sent a signal to destroy the crippled Antares when it became clear that the launch was not proceeding as planned – so it blew up before hitting the ground.


It is not yet certain whether the range safety officer did take such action.


Anomaly investigation


Detailed analysis of high-resolution launch video, coupled with examination of telemetry data from the rocket itself, will now play a central role in the inquiry, Culbertson said. The company will lead an investigation with NASA to find out what happened.


At NASA, William Gerstenmaier, associate administrator for human exploration and operations, defended Orbital's record: "Orbital has demonstrated extraordinary capabilities in its first two missions to the station. Launching rockets is an incredibly difficult undertaking, and we learn from each success and each setback."


The Cygnus cargo freighter atop the doomed rocket was carrying astronaut supplies, science experiments and a raft of CubeSat satellites, including 26 earth observation satellites from Planet Labs of San Francisco, California, and the first asteroid mining technology demonstrator satellite from Planetary Resources of Redmond, Washington.


Culbertson says Orbital regrets the loss of the valuable research hardware aboard Cygnus – but is relieved there were no injuries, despite people some miles away ducking for cover as debris flew around.


One focus of the investigation will almost certainly be the way the Antares is powered. Its first stage is powered by two reconditioned Russian NK-33 rocket engines – made and kept in storage since the Soviet era – but when refurbished by US firm Aerojet Rocketdyne they are sold as AJ-26 engines. The first two Antares/Cygnus flights to the ISS used such motors without any problems. However, in May, one of the Russian-derived motors failed in a NASA ground test for a 2015 mission – and just what happened has not yet been explained.


Use of Russian engines as the US tries to wrest human spaceflight away from Soyuz – by replacing the space shuttle with new US-made manned vehicles from Boeing and SpaceX – is controversial in the US.


Elon Musk, founder of SpaceX, described the use of old, refurbished Russian engines as akin to "the punchline to a joke" in a 2012 interview. However, he tweeted last night that he hopes Orbital – his rival in the cargo-to-the-ISS market – gets back on track soon.


Motor problems?


Another angle for investigators may involve the second-stage technology on Antares. The Antares that exploded was lofting a first-of-its-kind second stage that had not flown before. Powered like previous second stages by a solid rocket motor, the new one was a higher-powered model designed to loft heavier Cygnus vehicles.


But adjusting spacecraft capabilities has caused issues in the past when guidance software was not adjusted to take account of these new capabilities. For instance, in 1996 ESA lost its four-satellite Cluster mission when the first flight of the Ariane 5 rocket failed – because software designed for the previous, much lighter Ariane 4 had not been adjusted to the new rocket's demands.


However, spaceflight engineering consultant Rand Simberg of Jackson, Wyoming, thinks a second-stage issue is an unlikely cause. "It looked like a first-stage failure. But they have to consider every possibility," he says.


But the explosion may not have any serious, long-term implications because Antares's first stage is slated to be replaced a few years down the road, says Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics. "Both NASA and Orbital's commercial rivals have had close calls from time to time and it could easily have been another company's rocket going bang."


Planetary Resources says yesterday's destruction of its Arkyd 3 space telescope technology testing satellite will still be followed, as planned, by the launch of the Arkyd 6, a twice-as-big test spacecraft, in the third quarter of 2015. This will test the momentum wheels that will let the company's space telescopes track asteroids, says chief engineer Chris Lewicki, as well as trialling the laser and infrared imaging technology that will assess potential for mining.





Cellular alchemy turns skin cells into brain cells


Move over stem cells. A different kind of cellular alchemy is allowing cells to be converted directly into other tissues to treat disease or mend injuries.


Stem cells have long been touted as the future of regenerative medicine as they can multiply indefinitely and be turned into many different cell types. Ideally, this would take a personal approach – a patient's own cells would be converted into whatever type of cell is required to fix their injury or treat their symptoms. Earlier this year, for instance, people with age-related macular degeneration, the most common cause of blindness in the West, had retinal cells made from their own stem cells injected into their eyes.


Mature cells can be converted into stem cells by exposing them to a cocktail of chemicals that reverts them back to an embryonic-like state. Another set of chemicals is then used to turn the cells into the desired tissue type.


Skipping the stem cell stage would be more efficient, says Andrew Yoo of Washington University in St Louis, Missouri, and would reduce the chance that the new tissue could grow into a tumour – a risk with stem cells because of their capacity to regenerate.


Just add chemicals


Yoo has now managed to do just that, using a process known as "transdifferentiation". His team have turned human skin cells into medium spiny neurons, the cells that go wrong in Huntington's disease.


To the skin cells, the team added two short snippets of genetic material called microRNAs. MicroRNAs are signalling molecules and the two they picked turn on genes in brain cells during embryonic development. They also added four transcription factors – another kind of signalling molecule – to turn on genes normally active in medium spiny neurons.


Within four weeks the skin cells had changed into medium spiny neurons. When put into the brains of mice, the cells survived for at least six months and made connections with the native tissue. "This is a very cool result," says Ronald McKay of the Lieber Institute for Brain Development in Baltimore.


The team's next step is to transplant the cells into mice with a version of Huntington's to see if the new neurons reduce their symptoms.


Cocktail of cells


"Being able to produce cells with medium spiny neuron characteristics directly without first having to generate stem cells is impressive," says Edward Wild of University College London. "Using this offers the tantalising prospect of cell replacement treatments."


Wild points out, however, that before this approach can be used on people with Huntington's, researchers would first have to correct the faulty genetic mutation in their skin cells. And while medium spiny neurons are the first to degenerate in the disease, other brain cells may also be affected. "When it comes to cell replacement we should probably be aiming for a cocktail of cells," says Wild.


A few examples of transdifferentiation have previously been reported, including the creation of heart and pancreas cells.


Journal reference: Neuron, DOI: 10.1016/j.neuron.2014.10.016





Trap cells in sound to create strong cartilage



Using sound to suspend cells could be key to growing strong cartilage to repair knee damage, says the team that has developed a device to continuously hold growing cells in a sound wave.


When new tissues or body parts are required, they are usually created by covering a scaffold outlining the organ's natural structure with the recipient's cells. But the scaffold can degrade over time, weakening the structure of the implant.


A team led by Peter Glynne-Jones of the University of Southampton, UK, wondered whether they could come up with a way to create artificial tissues using just the person's cells. Cartilage was the ideal starting material as it has a relatively simple structure made up of a single cell type. They decided to try sound waves, as the variations in pressure could be used to control where the cells end up.


The team sent ultrasound waves bouncing back and forth inside a tiny, fluid-filled bioreactor holding cells taken from the top of the femur. Sure enough, the growing cells accumulated in the nodes of the wave, lining up in a layer before binding together. "The cell filaments pull the sheet into a thick, pancake-like structure," says Glynne-Jones.


Like the real thing


After 21 days the cells had become tiny sheets of cartilage with similar strength to natural cartilage.


"What's exciting is that we mechanically tested the cartilage and it's comparable to cartilage found in the human body," says Glynne-Jones. "We think that ultrasound is playing a key part in stimulating the cells to produce the better cartilage." That's because the mechanical stimulation of the cells with the ultrasound can be controlled to a high degree, ensuring the correct mix of proteins are generated to create strong cartilage.


When the sheets were grafted on to a sample of damaged human cartilage they successfully fused with the existing tissue.


"The method removes the challenge of synthesising biomaterial scaffolds," says biomedical engineer Julian Chaudhuri at the University of Bradford, UK. "It's an easy, non-intrusive method that has much clinical potential."


So far, the team has only produced small patches of cartilage so they are now hoping to scale up their approach. The ultimate goal is to produce cartilage from stem cells taken from a recipient's own bone marrow.





What one Amazonian tribe teaches us



Awajún wives use the threat of suicide to make men behave better (Image: Victor Rojas)


From female suicide to the nature of being civilised, probing tribal life in the 21st century needs an unflinching, critical eye


ON 26 January 1987, Time magazine ran an article in which US presidential candidate George H. W. Bush, replying to the suggestion that he should consider where to take the country, was quoted poking fun at "the vision thing". This was widely ridiculed in the US because it violated the clear expectation of Americans that their leaders have such a "vision".


Nothing strange there: most societies expect their leaders to have a view of how they think the future will or should be, and how this defines their duties to those they lead.


Vision of one sort or another plays a big role in Upriver by Michael Brown. This is his effort to understand the spiritual and political beliefs and practices of the Awajún people, who live in the Peruvian Amazon at the foothills of the Andes. The book details his research as he lived with them between 1976 and 1978, as well as his later reflections on experiences and observations gained during subsequent visits.


Brown has a passage that could easily have described what Americans expected of their president. He writes that the Awajún "believed that the well-being of men, and, to a lesser extent, of women, depended on the acquisition of powerful visions".


Upriver is one of the best books I have read on Amazonian peoples in a long while. Brown is even-handed and insightful, and he writes with flair and clarity. He takes an unflinching, clear-eyed look at tribal life and at the Awajún's difficult encounters with the outsiders with whom they have had varying degrees of contact for almost 500 years.


And there's an old question, dating back at least to the 18th-century French philosopher Jean-Jacques Rousseau, that concerns him deeply. He writes of the "contrast between the civilized and the not", claiming that "there is no more foundational question in the social sciences" than understanding this contrast. He links this to the Awajún's struggles over the past 40 years to maintain their identity, their security and their lands against strong multinational forces.


The question of what it means to be civilised recurs in many books on Amazonian peoples. What makes Brown's book stand out is his willingness to criticise all sides, including the Awajún, in his quest to arrive at a deeper, non-sloganeering understanding of the dynamics between the Awajúns and "civilizados", and among the Awajún themselves.


Along the way he makes observations that will bother those who idealise tribal societies. For example, he asks why the Awajún offer astoundingly shallow characterisations of the inner psychological states and motivations of other Awajúns, as well as outsiders.


This superficiality is deeply curious, given the potential advantage in understanding the motives of other Awajún for killing and suicide. After all, more than half of all deaths of Awajún men are due to murder, while more than a third of deaths among Awajún women are suicide.


Suicide is unknown in some Amazonian societies, such as the Pirahã, and common in others, such as the Suruwahá. In Awajún society, suicide functions partly as a threat women use to ensure that men treat them better: when a woman kills herself, her husband is likely to be condemned by the community, a censure that can culminate in her male relatives murdering him. Men clearly fear this danger, and the book gives examples of them treating their wives better in the belief that they might otherwise commit suicide.


In passing, though, Brown illustrates the pettiness and self-righteous silliness of some of his fellow anthropologists. He berates their unfortunately common tendency to flip between research and advocacy, between doing good science and standing up for tribal communities.


Diverse benefits


Brown also looks at sorcery, violence, daily life and sex, at the ways in which Awajún society is like or unlike other Amazonian societies, at the larger Peruvian culture, at peace, health and illness. His experiences, descriptions and understandings will resonate with all who have had the privilege of living among a tribal society. And for those who have never enjoyed this experience, Upriver will make you wish you had.


But perhaps the book's most important lesson is its portrayal of human diversity as crucial for the understanding of our species. Although the book doesn't successfully answer its core question – explaining the contrast between "civilised" and tribal societies, and what being "civilised" means now – Brown does show a plurality of visions the Awajún people have for their own future. And through these visions, we are better able to understand this Amazonian society and, ultimately, our species.


This article appeared in print under the headline "One world, many visions"


Daniel Everett is Dean of Arts and Sciences at Bentley University, Waltham, Massachusetts


Issue 2992 of New Scientist magazine









The comeback cubs: The great sea otter invasion


After being nearly wiped out a century ago, the sea otter population in Canada is booming. But not everyone is glad to welcome them back


IT'S shortly after dawn on Canada's west coast. We're standing on a rocky islet, and below us are the animals we have come to watch: about 15 sea otters are grooming, snacking and snoozing in the ocean. Then four fishing skiffs zoom by and the otters scatter.


I'm here with marine ecologist Erin Rechsteiner, who has been watching sea otters at this particular spot since they first arrived in autumn 2013. The raft of males, numbering up to 130 animals in winter and spring, is a sea otter vanguard.


"We're right on the edge of their range, for the moment," Rechsteiner says. When the males turned up, they feasted on sea urchins. "We're watching them shift to different foods now that they've been here for ...


To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.