Artificial Intelligence In Humanoid Robots

When people think of Artificial Intelligence (AI), the image that often pops into their heads is that of a robot gliding around and giving mechanical replies. AI takes many forms, but humanoid robots are among the most popular. They have been depicted in several Hollywood movies, and if you are a fan of science fiction, you might have come across a few humanoids. One of the earliest humanoids was designed around 1495 by Leonardo da Vinci. It was a suit of armor that could perform several human functions, such as sitting, standing and walking, and it even moved as though a real human were inside it.

Initially, the major aim of AI for humanoids was research; they were used to study how to create better prosthetics for humans. Now, humanoids are created for many purposes beyond research. Modern-day humanoids are developed to carry out different human tasks and fill different roles in the employment sector, such as personal assistant, receptionist or front-desk officer.

The process of inventing a humanoid is quite complex, and a lot of work and research goes into it. Inventors and engineers often face challenges: high-grade sensors and actuators are essential, and a tiny mistake can result in glitching. Humanoids move, talk and carry out actions through features such as sensors and actuators.

People assume that humanoid robots are robots that are structurally similar to human beings. That is, they have a head, torso, arms and legs. However, this is not always the case as some humanoids do not completely resemble humans.

Some are modeled after only specific human parts, such as the human head. Humanoids are usually classified as either androids or gynoids: an android is a humanoid robot designed to resemble a male human, while a gynoid resembles a female human.

Humanoids work through certain features. They have sensors that help them sense their environment, and some have cameras that enable them to see. Motors placed at strategic points guide their movements and gestures; these motors are usually referred to as actuators.

A lot of work, money and research goes into making these humanoid robots. The human body is studied and examined first to get a clear picture of what is about to be imitated. Then, one has to determine the task or purpose the humanoid is being created for. Humanoid robots are created for several purposes: some strictly for experimentation or research, others for entertainment, and some to carry out specific tasks, such as working as an AI-powered personal assistant or helping out at elderly homes.

The next step scientists and inventors take before a fully functional humanoid is ready is creating mechanisms similar to human body parts and testing them. Then comes the coding process, one of the most vital stages in creating a humanoid: inventors program the instructions that enable the humanoid to carry out its functions and answer questions when asked.
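
To make the idea concrete, here is a minimal sketch of the kind of sense-decide-act loop such control code implements. The `Sensor` and `Actuator` classes are hypothetical stand-ins invented for this illustration, not a real robotics API.

```python
# Illustrative "sense -> decide -> act" control loop for a humanoid.
# Sensor and Actuator are hypothetical placeholders, not a real API.

class Sensor:
    """Hypothetical proximity sensor reading from the environment."""
    def read(self) -> float:
        return 2.0  # e.g., distance to the nearest obstacle, in meters

class Actuator:
    """Hypothetical leg motor that accepts a velocity command."""
    def drive(self, velocity: float) -> None:
        print(f"driving at {velocity:.2f} m/s")

def control_loop(proximity: Sensor, legs: Actuator, steps: int = 3) -> None:
    for _ in range(steps):
        distance = proximity.read()
        # Decide: slow down as obstacles get closer, stop when too close.
        velocity = 0.0 if distance < 0.5 else min(1.0, distance / 5.0)
        legs.drive(velocity)

control_loop(Sensor(), Actuator())
```

A real humanoid runs a loop like this many times per second, fusing dozens of sensor streams and driving dozens of actuators, but the program structure is the same.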

Doesn’t sound so difficult, right? However, it would be foolhardy to think that creating a humanoid is as easy as creating a kite or a slingshot in your backyard. Although humanoid robots are becoming very popular, inventors face a few challenges in creating fully functional and realistic ones. Some of these challenges include:
  • Actuators: These are the motors that help in motion and making gestures. The human body is dynamic. You can easily pick up a rock, toss it across the street, spin seven times and do the waltz, all in the space of ten to fifteen seconds. To make a humanoid robot, you need strong, efficient actuators that can imitate these actions flexibly within the same time frame or even less. The actuators should be efficient enough to carry out a wide range of actions.
  • Sensors: These are what help humanoids sense their environment. Humanoids need the human senses of touch, smell, sight, hearing and balance to function properly. The hearing sensor is important for the humanoid to hear instructions, decipher them and carry them out. The touch sensor prevents it from bumping into things and damaging itself. The humanoid needs a sensor to balance its movement, and equally needs heat and pain sensors to know when it faces harm or is being damaged. Facial sensors also need to be intact for the humanoid to make facial expressions, and these sensors should support a wide range of expressions.

Making sure that these sensors are available and efficient is a hard task.

  • AI-based Interaction: The level at which humanoid robots can interact with humans is quite limited. This is where Artificial Intelligence is critical: it can help decipher commands, questions and statements, and might even be able to give witty, sarcastic replies and understand random, ambiguous human ramblings (a toy sketch of this idea follows below).
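
As a rough illustration of what "deciphering commands" means at the very simplest level, here is a toy keyword-based intent matcher. Real humanoids use far richer speech and language models; the intent names and keywords below are invented for the example.

```python
# Toy command deciphering: map free-form speech to a known intent.
# The intents and keywords are invented for illustration only.

INTENTS = {
    "greet": {"hello", "hi", "hey"},
    "fetch": {"bring", "fetch", "get"},
    "stop": {"stop", "halt", "freeze"},
}

def decipher(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return "unknown"  # ambiguous ramblings fall through to here

print(decipher("Hey robot, how are you?"))  # -> greet
print(decipher("Please fetch my glasses"))  # -> fetch
```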

However, some humanoid robots are so human-like and efficient that they have become quite popular. Here are a few of them.

  1. Sophia: This is the world’s first robot citizen. She was introduced to the United Nations on October 11, 2017. On October 25th, she was granted Saudi Arabian citizenship, making her the first humanoid robot ever to have a nationality.

Sophia was created by Hanson Robotics and can carry out a wide range of human actions. She is said to be capable of making up to fifty facial expressions and can also express feelings. She has very expressive eyes, her Artificial Intelligence revolves around human values, and she even has a sense of humor. This particular humanoid was designed to look like the late British actress Audrey Hepburn. Since she was granted citizenship, Sophia has given several interviews, attended conferences and become one of the world’s most popular humanoids.

  2. The Kodomoroid TV Presenter: This humanoid robot was invented in Japan. Her name is derived from the Japanese word for child, ‘kodomo’, and the word ‘android’. She speaks a number of languages and is capable of reading the news and giving weather forecasts.

She has been placed at the National Museum of Emerging Science and Innovation (Miraikan) in Tokyo, where she currently works.

  3. Jia Jia: A team at the University of Science and Technology of China worked on this humanoid robot for three years before its release. She is capable of holding conversations but has limited motion and stilted speech. She does not have a full range of expressions, but her team of inventors plans to develop her further and give her learning abilities. Although her speech and vocabulary need further work, she is still fairly realistic.

Humanoid robots are here to stay, and as AI continues to progress, we may soon find them everywhere in our daily lives.

Open-access pioneer Randy Schekman on Plan S and disrupting scientific publishing

Nobel laureate Randy Schekman shook up the publishing industry when he launched the open-access journal eLife in 2012.

Armed with millions in funding from three of the world’s largest private biomedical charities — the Wellcome Trust, the Max Planck Society and the Howard Hughes Medical Institute — Schekman designed the journal to compete with publishing powerhouses such as Nature, Science and Cell. (Nature’s news team is independent of its journal team and its publisher, Springer Nature.)

eLife experimented with innovative approaches such as collaborative peer review — in which reviewers work together to vet research — that caused ripples in scientific publishing.

And for the first few years, researchers could publish their work in eLife for free. That came to an end in 2017, because the journal needed more revenue streams to help it to grow.

Schekman stepped down from eLife on 31 January to chair an advisory council for the Aligning Science Across Parkinson’s initiative, funded by the Sergey Brin Family Foundation in San Francisco, California, to better coordinate research into Parkinson’s disease.

Nature’s news team asked Schekman about the impact he thinks eLife has had on scholarly publishing, and about the future of open-access journals.

What were your goals when you started eLife?

eLife started with a quite substantial dowry from the Wellcome Trust, the Max Planck Society and the Howard Hughes Medical Institute. It was a clean slate, and we could do more or less what we wanted as long as it was successful. Success was not defined by impact factor — we were firmly committed to dampening its influence in science. It was defined by the kinds of papers that people send to us for publication: we would judge it a success if our own board members sent us their best work to publish — some have and some not yet.

One of the principal decisions we made was that all choices of which papers to include should come from working scientists. I took advantage of my experiences as editor-in-chief from 2006–11 at the Proceedings of the National Academy of Sciences (PNAS) to cook up this idea of reviewers consulting each other when evaluating a paper. This has become a unique feature of eLife.

What are the benefits and drawbacks of eLife’s practice of collaborative peer review?

There are many benefits. When you agree to review a paper for eLife, you know that your identity will be shared with other reviewers, so you can’t hide behind your anonymity. People can’t say things that they can’t defend. When a paper is considered appropriate for revision, the board member who is monitoring the review often writes the letter to the author. They organize it so that only the key points are written in a summary, which is helpful for the authors.

One downside is that it creates a bit more work, because the reviewers are not done when they submit their reviews. It takes time to have that conversation to craft a decision letter. The reviewers tell me that they enjoy this process.

Another potential downside is that you can have an assistant professor whose opinion is going against that of an established person in their field — and the assistant professor might not say what they think. But I’ve only had one reviewer tell me they felt intimidated during the process. My feeling is that it might be quite the opposite: the younger person is closer to the technical aspects of the work, and in many cases, young people who agree to review a paper want to prove themselves. If it were a problem, we would find it difficult to get young people to serve as reviewers, and we don’t.

What do you think of Plan S, an initiative to make all papers open access on publication?

I’m very supportive of this. Open access is the future. Commercial journals have been fighting against this very hard because it poses a clear danger to their profit margin. The public has paid for this research, so they deserve to have access to it.

However, there are legitimate expenses of publishing that need to be made clear. PNAS is published by a science society that does not make a profit, and it recently said that it estimates the cost of publishing a paper to be US$6,000, so any cap on article-processing charges that could come from Plan S might have to allow for this. But $6,000 seems very high to me. It might be that journals may have different article-processing charges depending on their selectivity.

How do you think Plan S will affect scholarly publishing?

There will be a shakedown in the business. Some journals will lose out. Publishing is not a static business — the advent of the preprint server has really changed things, for example. Journals are going to change, and Plan S could have a strong influence.

Do you think that eLife can survive in a Plan S world without extra funding?

The journal still receives income from charitable funders, as well as from article-processing charges. But we hope for eLife to be self-sustaining within two or three years. The financial models we have drawn show that it can be done. We want to keep having the same high standards and peer-review mechanisms going forward. We believe that eLife has the bandwidth to grow maybe two-fold in submissions — and if we do this, we can sustain ourselves without charitable funding.

Deal reveals what scientists in Germany are paying for open access

Project Deal, a consortium of libraries, universities, and research institutes in Germany, has unveiled an unprecedented deal with a major journal publisher—Wiley—that is drawing close scrutiny from advocates of open access to scientific papers.

The pact, signed last month but made public this week, has been hailed as the first such country-wide agreement within a leading research nation. (Only institutions in the United States, China, and the United Kingdom publish more papers.) It gives researchers working at more than 700 Project Deal institutions access to the more than 1500 journals published by Wiley, based in Hoboken, New Jersey, as well as the publisher’s archive. It also allows researchers to make papers they publish with Wiley free to the public at no extra cost.

This business arrangement, known as a “publish and read” deal, has been touted as one way to promote open-access publishing. But until this week, a key part of the Wiley agreement—how much it will cost—had been secret.

Now, the numbers are out. Germany will pay Wiley €2750 for each paper published in one of the publisher’s so-called hybrid journals, which contain both paywalled and free papers. The contract anticipates researchers will publish about 9500 such papers per year, at a cost of €26 million. In addition, researchers will get a 20% discount on the price of publishing in Wiley journals that are already open access.
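
The headline numbers are easy to check; here is a quick sketch using only the figures from the reporting above:

```python
# Checking the contract arithmetic reported above.
papers_per_year = 9_500  # anticipated hybrid-journal papers per year
fee_per_paper = 2_750    # euros per paper under the German deal

total = papers_per_year * fee_per_paper
print(f"Annual cost: EUR {total:,}")  # EUR 26,125,000, i.e. about EUR 26 million

# For comparison, the per-paper gap with the Dutch deal mentioned below:
print(f"Gap per paper: EUR {fee_per_paper - 1_600:,}")  # EUR 1,150
```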

The deal is an important step toward more open access in scientific publishing, but the per paper fee of €2750 seems high, says Leo Waaijers, an open-access advocate and retired librarian at the Delft University of Technology in the Netherlands. Dutch researchers are paying Wiley just €1600 per paper under a similar deal in the Netherlands, he notes. “It’s the same process, the same product, so why the price difference?” he says.

The explanation is that Germany’s deal with Wiley was designed to be “more or less budget-neutral,” says Gerard Meijer, a physicist at the Fritz Haber Institute, part of the Max Planck Society in Berlin, and one of the negotiators for Project Deal. The goal was to keep Germany’s 2019 payments to Wiley about the same as they were in 2018, he says. And as a larger country with more institutions, Germany paid more in subscription fees to Wiley than the Netherlands. That translated to a higher article publishing fee. But the difference is that papers from Project Deal researchers will now be freely available around the world. In addition, some institutions have gained access to journals that they did not have access to before.

One advantage of the deal is that German researchers will no longer be paying twice for Wiley’s hybrid journals—once for a subscription, and again if they want to make a paper free—says Lidia Borrell-Damian of the European University Association in Brussels. “Germany seems protected from double-dipping … and that’s important,” she says.

Eventually, Waaijers hopes German institutions will be able to negotiate lower open-access publishing fees. But he sees the current contract, which runs for 3 years, as a good first step. “I think it is not possible for Germany to say to Wiley at the moment: ‘We want a contract for 1600 [euros] per article,’” he says. “That would mean an enormous step back financially for Wiley, and they are absolutely not prepared to make that step.”

The fact that the details of the German contract have become public is also important, Borrell-Damian says. “Contracts should be public because this is about public money spent,” she says. And if other countries sign similar deals, and the details become public, then “the whole game of price comparison may start,” Waaijers says. And that, open-access advocates say, could produce pressure for even lower publishing fees.

Expanding the Gene Editing Toolbox: Scientists Sharpen Their Molecular Scissors

Wake Forest Institute for Regenerative Medicine (WFIRM) scientists have figured out a better way to deliver a DNA editing tool, shortening the time the editor proteins remain in the cells, in what they describe as a “hit and run” approach.

CRISPR (clustered regularly interspaced short palindromic repeats) technology is used to alter DNA sequences and modify gene function. In the CRISPR/Cas9 system, the Cas9 enzyme is used like a pair of scissors to cut the two strands of DNA at a specific location so that bits of DNA can be added, removed or repaired. But CRISPR/Cas9 is not 100 percent accurate and can potentially cut at unexpected locations, causing unwanted results.
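
As a rough sketch of the targeting idea (not of the WFIRM delivery system itself): Cas9 is directed by a guide sequence and cuts where that guide matches the DNA, next to an “NGG” motif called the PAM. The toy function below mimics that logic on a DNA string; real targeting chemistry also tolerates mismatches, which is where off-target cuts come from.

```python
# Toy model of Cas9 targeting: find the guide sequence followed by an
# NGG PAM, and report the cut site (~3 bases upstream of the PAM).
# This illustrates the principle only; it is not a bioinformatics tool.

import re

def find_cut_site(dna, guide):
    """Return the index where Cas9 would cut, or None if no match."""
    match = re.search(guide + "[ACGT]GG", dna)
    if match is None:
        return None
    return match.start() + len(guide) - 3  # cut ~3 nt before the PAM

dna = "TTTACCGGA" + "GATTACAGATTACAGATTACA" + "TGG" + "CCCTTT"
print(find_cut_site(dna, "GATTACAGATTACAGATTACA"))  # -> 27
```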

“One of the major challenges of CRISPR/Cas9 mRNA technologies is the possibility of off-targets which may cause tumors or mutations,” said Baisong Lu, Ph.D., assistant professor of regenerative medicine at WFIRM and one of the lead authors of the paper. Although other types of lentivirus-like bionanoparticles (LVLPs) have been described for delivering proteins or mRNAs, Lu said, “the LVLP we developed has unique features which will make it a useful tool in the expanding genome editing toolbox.”

To address the inaccuracy issue, WFIRM researchers asked the question: Is there a way to efficiently deliver Cas9 activity but achieve transient expression of genome editing proteins? They tested various strategies and then took the best properties of two widely used delivery vehicles – lentivirus vector and nanoparticles – and combined them, creating a system that efficiently packages Cas9 mRNA into LVLPs, enabling transient expression and highly efficient editing.

Lentiviral vectors are a common gene delivery vehicle in research labs and are already widely used to deliver CRISPR/Cas9 mRNA for efficient genome editing. Nanoparticles are also used, but they are less efficient at delivering CRISPR/Cas9.

The WFIRM team reported its findings in a paper published recently in the journal Nucleic Acids Research.

“By combining the transient expression feature of nanoparticle-delivery strategies while retaining the transduction efficiency of lentiviral vectors, we have created a system that may be used for packaging various editor protein mRNA for genome editing in a ‘hit and run’ manner,” said Anthony Atala, M.D., director of WFIRM and co-lead author of the paper. “This system will not only improve safety but also avoid possible immune response to the editor proteins, which could improve in vivo gene editing efficiency which will be useful in research and clinical applications.”

This article has been republished from materials provided by Wake Forest Institute for Regenerative Medicine.

Note: material may have been edited for length and content. For further information, please contact the cited source.

Chinese gene-editing scientist He Jiankui may have made the twins smarter, scientists say

The controversial Chinese scientist who shocked the world by claiming he edited the genes of twin sisters may have inadvertently enhanced their brains, scientists say.

He Jiankui announced last November he had used technology known as CRISPR to alter the embryonic genes of twins Lulu and Nana in an attempt to protect them from infection with HIV, the virus carried by their father — a move that received widespread international condemnation.

New research published in the journal Cell last Thursday claims the same alteration to the girls’ DNA — deleting a gene called CCR5 — not only makes mice smarter but also improves human brain recovery after a stroke and could be linked to greater success in school, MIT Technology Review reported.

But Alcino J Silva, a neurobiologist at the University of California, Los Angeles (UCLA), whose lab uncovered the major new role for the CCR5 gene in memory and the brain’s ability to form new connections, said the exact effect on the girls’ cognition was impossible to predict.

He added that was “why it should not be done”.

While there was no proof that Mr He aimed to enhance the babies’ intelligence, he told an international human genome editing summit in Hong Kong last November that he was aware of a previous study co-authored by Dr Silva in 2016 that showed removing the CCR5 gene from mice significantly improved their memory.

The team had looked at more than 140 different genetic alterations to find which made mice smarter.

“I saw that paper, and I believe it needs more independent verification,” he said in response to a question asking whether he had inadvertently enhanced the cognitive ability of the gene-edited babies.

Mr He also told the panellists he was “against using genome editing for enhancement”.

Regardless of Mr He’s intentions, the new research forms part of increasing evidence that the CCR5 gene plays a significant role in the brain.

According to MIT Technology Review, the latest paper on CCR5 also showed that people missing at least one copy of the CCR5 gene appear to perform better at school, therefore suggesting a link to everyday intelligence.

“Could it be conceivable that at one point in the future we could increase the average IQ of the population? I would not be a scientist if I said no,” Dr Silva told MIT Technology Review.

“The work in mice demonstrates the answer may be yes. But mice are not people.

“We simply don’t know what the consequences will be in mucking around. We are not ready for it yet.”

Mr He, who was fired from his role at the Southern University of Science and Technology in Shenzhen, has not been seen in public since last November, shortly after his gene-editing announcement.

He is now believed to be staying in a heavily guarded university-owned apartment in Shenzhen.

Chinese authorities have also put a halt to all forms of research like Mr He’s and ordered universities to review all research work on gene editing.

Are genetically engineered crops less safe than classically-bred food?

Crops and foods today do not look like they once did.

Farmers and plant breeders have been modifying plant genes since the earliest human communities formed and farming took hold, in order to develop crops that better resist pests and foods with improved nutrition and taste.

Biotechnology proponents, particularly agro-biotechnology corporations, like to claim that humans have been genetically modifying crops for thousands of years. Biotech advocates say that modern genetic techniques, including GMOs and CRISPR gene editing, are just a continuation of this time-tested process. That’s true, sort of. Modern corn, bananas, eggplant, Brussels sprouts, and frankly almost every food we eat have been altered in some way by humans. Advances in technology, they say, have made genetic modification more precise, safer and healthier than ever before, so we should embrace them with streamlined regulatory oversight.

By and large, the public has been queasy about endorsing that view, no matter that it is overwhelmingly held in the mainstream science community. Many consumers, often stoked by anti-biotech advocates and marketing campaigns from organic producers, worry that the innovations introduced since the approval of the first GMO crops in the United States in 1996 might be something uniquely different, introducing unintended and dangerous side effects that could harm human health or the environment.

Case study: Teosinte to corn

Traditional breeding of crops has existed since the beginning of human farming communities. Consider corn, which supplies about 21 percent of human nutrition across the globe. Scientists now believe it is the descendant of an ancient wild grass, with relatives in Mexico today known as teosinte. It had kernels, but instead of the luscious ones you are familiar with today, it had inedible, black ones that could crack your teeth. That was before humans intervened to bend nature.

Beginning about 10,000-7,000 years ago, our ancestors set up field laboratories—yes, that ugly word often used by biotech critics to diss recently-bred crops—to randomly experiment on this odd grass with hard buds. Through trial and error, cobs became larger and slightly more edible over the centuries, and with more rows of kernels, eventually taking on the form of modern maize. Modern sweet corn yields 100 times more than teosinte, a testament to genetic modification.

Image: Teosinte to corn. Credit: Vox, https://www.vox.com/2014/10/15/6982053/selective-breeding-farming-evolution-corn-watermelon-peaches

Today, crop breeding encompasses a whole range of techniques. The Genetic Literacy Project thought it might be instructive to make available an infographic that illustrates the various methods of crop genetic modification, including how many genes are affected and what types of regulation exist for each technology. We thought it important to specifically and accurately define genetic modification techniques, so that consumers are not overly fearful or overly optimistic about the risks and benefits.

We illustrate that traditional breeding, which most consumers are not worried about, is actually the least precise but also the least regulated. Newer biotechnologies are more precise, yet counterintuitively, from a science perspective, are more regulated. Should consumers be more concerned about one type of modification versus another? The evidence suggests ‘no.’ Although many consumers focus most on the process used to create new crops and food, scientists and regulatory agencies in the US and Canada typically focus on products and their safety. This is because various processes can be used to create products with the same level of health and safety. For example, mutagenesis and gene editing (two different processes) could both be used to create herbicide-resistant wheat (the same product).

Lots of genes are swapped at once in traditional breeding, a process that scientists consider “messy.” Some traditional breeding techniques are high-tech, such as marker-assisted breeding. While breeders have been able to cross plants with their wild relatives (called a wide cross) to produce hybrids, the possibilities of using genes from distantly-related or other species are limited.

In the 1920s and 1930s, scientists explored the effect of radiation on a wide variety of plants. They found that applications of radiation produced mutations in plant genomes, creating plants that were different from the original. The Rio Star grapefruit was developed when Texas scientist Richard Hensz irradiated Ruby Red grapefruit seeds with X-rays. The new grapefruit had darker flesh and greater resistance to cold, which helped it survive a severe freeze in 1983 that killed other grapefruit trees. Since the 1940s, thousands of other crops have been produced with mutagenesis.

As molecular techniques in biology became available around the 1970s, scientists began to look more precisely at ways to alter genes in plants. RNA interference techniques allow scientists to switch off genes coding for undesired traits precisely, while recombinant DNA techniques allow them to insert genes coding for desired traits precisely. Other than allowing more precision in genetic modification, these molecular techniques also open up the possibilities of using genes from other species.

Risks vs. benefits

Although we want to highlight the similarities and differences in various genetic modification processes, it is typically more illustrative to focus on the risks and benefits of specific genetically-modified crops and foods, instead of supporting or fighting over one genetic modification process versus another.

Another complexity is that gene editing via CRISPR or other techniques is fairly new. Every gene-edited plant currently in the regulatory pipeline has been created by only deleting specific genes. This means that currently, gene-edited plants do not have any so-called foreign or modified DNA in them. For example, a high-fiber wheat is being developed that would contain three times the amount of fiber in standard white flour, and contains no DNA from other species. However, it is also possible to modify or insert foreign genes using gene editing. Modification and insertion of genes will likely be regulated differently than deletions.

There is evidence that because gene editing mimics widely accepted and time-tested techniques such as mutagenesis, it will face minimal regulations. That’s so far true in North America and in about a dozen other countries around the world, but not in Europe. The EU regulates GMOs and gene editing the same, under legislation that dates to the early 2000s that most scientists consider outdated.

It is important to remember that the evidence suggests that none of these techniques pose any danger to humans, farmers or consumers. Unintended effects as a result of genetic modification—whether the conventional kind or more recent biotech versions such as GMOs or gene editing—are extremely rare, mostly because of the extensive amount of back-crossing that occurs in all types of genetic modification processes, traditional or biotech.

Backcrossing is when a genetically modified crop is crossed with the unmodified crop over multiple generations. The goal of backcrossing is to obtain a line as identical as possible to the unmodified original crop, with only the addition of the gene of interest. After 6 crosses, the resulting plant is 99.22% genetically identical to the unmodified crop. Therefore, although it is important to understand the differences between genetic modification processes, the risk of unintended consequences of genetic modification of crops and food that make it to market remains extremely low.
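
The 99.22% figure follows from simple halving arithmetic, as this quick sketch shows (assuming, as a simplification, that each backcross halves the donor contribution exactly):

```python
# Where 99.22% comes from: the first hybrid is 50% donor genome, and each
# backcross to the unmodified parent halves the donor share (on average).

donor_fraction = 0.5      # F1 hybrid: half donor, half original crop
for _ in range(6):        # six rounds of backcrossing
    donor_fraction /= 2

print(f"{1 - donor_fraction:.2%}")  # 99.22% identical to the original
```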

It is easy to oversimplify issues surrounding genetic modification techniques. However, a deep and nuanced understanding of the science and technology underlying biotechnology is critical for successful decision-making and policy-making related to crop genetic modification.

Kayleen Schreiber is the GLP’s infographics and data visualization specialist. She researched and authored this series as well as creating the figures, graphs, and illustrations. Follow her at her website, on Twitter @ksphd or on Instagram @ksphd

The biodiversity that is crucial for our food and agriculture is disappearing by the day

FAO launches the first-ever global report on the state of biodiversity that underpins our food systems

22 February 2019, Rome – The first-ever report of its kind presents mounting and worrying evidence that the biodiversity that underpins our food systems is disappearing – putting the future of our food, livelihoods, health and environment under severe threat.

Once lost, warns FAO’s State of the World’s Biodiversity for Food and Agriculture report, launched today, biodiversity for food and agriculture – i.e. all the species that support our food systems and sustain the people who grow and/or provide our food – cannot be recovered.

Biodiversity for food and agriculture is all the plants and animals – wild and domesticated – that provide food, feed, fuel and fibre. It is also the myriad of organisms that support food production through ecosystem services – called “associated biodiversity”. This includes all the plants, animals and micro-organisms (such as insects, bats, birds, mangroves, corals, seagrasses, earthworms, soil-dwelling fungi and bacteria) that keep soils fertile, pollinate plants, purify water and air, keep fish and trees healthy, and fight crop and livestock pests and diseases.

The report, prepared by FAO under the guidance of the Commission on Genetic Resources for Food and Agriculture, looks at all these elements. It is based on information provided specifically for this report by 91 countries, and on an analysis of the latest global data.

“Biodiversity is critical for safeguarding global food security, underpinning healthy and nutritious diets, improving rural livelihoods, and enhancing the resilience of people and communities. We need to use biodiversity in a sustainable way, so that we can better respond to rising climate change challenges and produce food in a way that doesn’t harm our environment,” said FAO’s Director-General José Graziano da Silva. “Less biodiversity means that plants and animals are more vulnerable to pests and diseases. Compounded by our reliance on fewer and fewer species to feed ourselves, the increasing loss of biodiversity for food and agriculture puts food security and nutrition at risk,” added Graziano da Silva.

The foundation of our food systems is under severe threat

The report points to decreasing plant diversity in farmers’ fields, rising numbers of livestock breeds at risk of extinction and increases in the proportion of overfished fish stocks.

Of some 6,000 plant species cultivated for food, fewer than 200 contribute substantially to global food output, and only nine account for 66 percent of total crop production.

The world’s livestock production is based on about 40 animal species, with only a handful providing the vast majority of meat, milk and eggs. Of the 7,745 local (occurring in one country) breeds of livestock reported globally, 26 percent are at risk of extinction. Nearly a third of fish stocks are overfished, and more than half have reached their sustainable limit.

Information from the 91 reporting countries reveals that wild food species and many species that contribute to ecosystem services that are vital to food and agriculture, including pollinators, soil organisms and natural enemies of pests, are rapidly disappearing. For example, countries report that 24 percent of nearly 4,000 wild food species – mainly plants, fish and mammals – are decreasing in abundance. But the proportion of wild foods in decline is likely to be even greater as the state of more than half of the reported wild food species is unknown.

The largest number of wild food species in decline appear in countries in Latin America and the Caribbean, followed by Asia-Pacific and Africa. This could be, however, a result of wild food species being more studied and/or reported on in these countries than in others. Many associated biodiversity species are also under severe threat. These include birds, bats and insects that help control pests and diseases, soil biodiversity, and wild pollinators – such as bees, butterflies, bats and birds.

Forests, rangelands, mangroves, seagrass meadows, coral reefs and wetlands in general – key ecosystems that deliver numerous services essential to food and agriculture and are home to countless species – are also rapidly declining.

Leading causes of biodiversity loss 

The driver of loss of biodiversity for food and agriculture cited by most reporting countries is changes in land and water use and management, followed by pollution, overexploitation and overharvesting, climate change, and population growth and urbanization.

In the case of associated biodiversity, while all regions report habitat alteration and loss as major threats, other key drivers vary across regions. These are overexploitation, hunting and poaching in Africa; deforestation, changes in land use and intensified agriculture in Europe and Central Asia; overexploitation, pests, diseases and invasive species in Latin America and the Caribbean; overexploitation in the Near East and North Africa, and deforestation in Asia.

Biodiversity-friendly practices are on the rise

The report highlights a growing interest in biodiversity-friendly practices and approaches. Eighty percent of the 91 countries indicate using one or more biodiversity-friendly practices and approaches such as: organic agriculture, integrated pest management, conservation agriculture, sustainable soil management, agroecology, sustainable forest management, agroforestry, diversification practices in aquaculture, ecosystem approach to fisheries and ecosystem restoration.

Conservation efforts, both on-site (e.g. protected areas, on farm management) and off-site (e.g. gene banks, zoos, culture collections, botanic gardens) are also increasing globally, although levels of coverage and protection are often inadequate.

Reversing trends that lead to biodiversity loss – what is needed

While the rise in biodiversity-friendly practices is encouraging, more needs to be done to stop the loss of biodiversity for food and agriculture. Most countries have put in place legal, policy and institutional frameworks for the sustainable use and conservation of biodiversity, but these are often inadequate or insufficient.

The report calls on governments and the international community to do more to strengthen enabling frameworks, create incentives and benefit-sharing measures, promote pro-biodiversity initiatives and address the core drivers of biodiversity loss.

Greater efforts must also be made to improve the state of knowledge of biodiversity for food and agriculture as many information gaps remain, particularly for associated biodiversity species. Many such species have never been identified and described, particularly invertebrates and micro-organisms. Over 99 percent of bacteria and protist species – and their impact on food and agriculture – remain unknown.

There is a need to improve collaboration among policy-makers, producer organizations, consumers, the private sector and civil-society organizations across food and agriculture and environment sectors.

Opportunities to develop more markets for biodiversity-friendly products could be explored more.

The report also highlights the role the general public can play in reducing pressures on biodiversity for food and agriculture. Consumers may be able to opt for sustainably grown products, buy from farmers’ markets, or boycott foods seen as unsustainable. In several countries, “citizen scientists” play an important role in monitoring biodiversity for food and agriculture.

Examples: impacts of biodiversity loss and biodiversity-friendly practices

  • In The Gambia, massive losses of wild foods have forced communities to turn to alternatives, often industrially produced foods, to supplement their diets. 
  • In Egypt, rising temperatures will lead to northwards shifts in ranges of fish species, with impacts on fishery production.
  • Labour shortages, flows of remittances and the increasing availability of cheap alternative products on local markets have contributed to the abandonment of local crops in Nepal.
  • In the Amazonian forests of Peru, climatic changes are predicted to lead to “savannization”, with negative impacts on wild foods’ supply.
  • Californian farmers allow their rice fields to flood in winter instead of burning them after the growing season. This provides 111,000 hectares of wetlands and open space for 230 bird species, many at risk of extinction. As a result, many species have begun to increase in numbers, and the number of ducks has doubled.
  • In France, about 300,000 hectares of land are managed using agroecological principles. 
  • In Kiribati, integrated farming of milkfish, sandfish, sea cucumber and seaweed ensures regular food and income because, despite changing weather conditions, at least one component of the system is always producing food.

Scientific research necessary for precision breeding for sustainable agriculture

On 24 October, Science for Democracy endorsed a position paper that calls upon European policy makers to safeguard innovation in plant science and agriculture. The document is signed by scientists representing more than 85 European plant and life sciences research centers. Science for Democracy shares the concerns about a decision of the European Court of Justice on modern genome editing techniques that could lead to a de facto ban on innovative crop breeding.

European farmers may be deprived of a new generation of more climate-resilient and more nutritious crop varieties that are urgently needed to respond to current ecological and societal challenges. Science for Democracy has addressed some of these issues in its position paper on the Horizon Europe draft program and supports statements from European research institutes that have appeared online over the last months. This statement is proof of a solid consensus among the academic life science research community in Europe on the negative consequences of this ruling.

On 18 September in Rome and 4 October in Milan, Science for Democracy and the Associazione Luca Coscioni held public “CRISPR Snack” events to request clarification from the Italian Government on the national implementation of the ECJ decision. So far the Conte Government has not responded.