To cope with climate change and population growth, the continent urgently needs more home-grown researchers, argue Anagaw Atickem, Nils Chr. Stenseth and colleagues.
Africa’s population is projected to nearly quadruple over the next century. And that is following a staggering increase over just seven decades — from 200 million people in 1950 to 1.25 billion in 2018. Meanwhile, temperatures across the continent are expected to rise by between 3 °C and 4 °C over the next century, bringing more drought, flooding, conflict and species loss.
To face these formidable challenges, Africa must improve its capabilities in higher education and research. Yet the quality of the scientific education provided at many universities on the continent has, if anything, deteriorated over the past two decades.
Read the rest of the article on Nature.
On 26 June 2019, on the occasion of the International Day Against Drug Abuse and Illicit Trafficking, the United Nations Office on Drugs and Crime (UNODC) presented the 2019 World Drug Report.
Science for Democracy’s coordinator Marco Perduca, who followed the presentation together with Guido Long, noted: “The important methodological novelty is in data collection – for example from Nigeria – which resulted in upward revisions of the estimates. If and when all of Africa and all of Asia follow suit, we will have the ‘surprise’ of discovering that the consumption of illicit substances is a cultural phenomenon that concerns far more people than the 271 million declared today, who already represent a 30% increase on ten years ago.
As the use of substances managed by criminal organisations grows, problematic use also increases – from 30.5 to 35 million people globally. The number of deaths rises too, to 585,000, even if it is not clear how many are linked to overdoses and how many to contributory causes.
The most common substance remains cannabis, with 188 million consumers. The increase in users of opioids (even legal ones) is striking: up 56% compared with the estimate for 2016.
Unfortunately, the relationship between drugs and blood-borne diseases is not weakening either: 1.4 million of the roughly 11 million people who inject drugs live with HIV, while 5.6 million live with hepatitis C and 1.2 million with both.
The theme of this year’s international day was “health for justice and justice for health”, yet the disastrous impact on the health of those who are prosecuted, if not persecuted, for drug-related offences is not acknowledged in the report. Far more is still invested in security and prisons than in social and health services – all this despite the increase in consumption, and in particular in problematic consumption. And yet the report ends with a recommendation on the necessity of providing help to those in need (only one in seven people who need support receives it).
The document also refers to the danger of risk concentration (for example in prisons), as well as to the so-called “too much and not enough” paradox concerning the medical use of painkillers and the prevalence of opioids for non-medical use. As was already the case in the International Narcotics Control Board’s report, several pages are dedicated to the legal cannabis market, to the presence of criminality even in countries where cannabis has been legalised, and to the necessity of monitoring these situations closely.”
Marco Perduca concludes: “The possibility that later this year the leadership of UNODC passes from the hands of a Russian to those of a Chinese official does not bode well for a future in which the UN starts assessing the impact of ‘international drug control’ measures on health and the administration of justice, as well as on the environment. Europeans, both individually and as the EU, need to coordinate a response that on the one hand counters the Sino-Russian-Arab policing and punitive front, and on the other does not get in the way of the WHO recommendation to reschedule cannabis in light of its ‘therapeutic potential’.”
If you are interested in reading more about the presentation you can visit the CND Blog by the International Drug Policy Consortium.
Global Commission on Drug Policy calls for a reclassification of drugs including cocaine, heroin and cannabis
Illegal drugs including cocaine, heroin and cannabis should be reclassified to reflect a scientific assessment of harm, according to a report by the Global Commission on Drug Policy.
The commission, which includes 14 former heads of state from countries such as Colombia, Mexico, Portugal and New Zealand, said the international classification system underpinning drug control is “biased and inconsistent”.
A “deep-lying imbalance” between controlling substances and allowing access for medicinal purposes had caused “collateral damage”, it said. Such damage included patients in low- and middle-income countries forced to undergo surgery without anaesthetic, to go without essential medicines and to die in unnecessary pain due to lack of opioid pain relief.
Other negative consequences were the spread of infectious diseases, higher mortality and the global prison overcrowding crisis, the report said.
“The international system to classify drugs is at the core of the drug control regime – and unfortunately the core is rotten,” said Ruth Dreifuss, former president of Switzerland and chair of the commission. She called for a “critical review” of the classification system, prioritising the role of the World Health Organization (WHO) and scientific research in setting criteria based on harms and benefits.
Restrictions on milder, less harmful drugs should also be loosened, the commission said, to include “other legitimate uses”, including traditional, religious or social use.
Some illegal drugs, including cocaine, heroin, cannabis and cannabis resin, were evaluated up to 30 years ago or have never been evaluated, Dreifuss said, which seriously undermines their international control.
Asked whether these drugs should be reclassified, Juan Manuel Santos, the former president of Colombia, replied “yes”. “The scientific basis is non-existent,” Santos told journalists at an online briefing to discuss the commission’s report.
“It was a political decision. According to the studies we’ve seen over past years, substances like cannabis are less harmful than alcohol,” he said. “I come from Colombia, probably the country that has paid the highest price for the war on drugs.”
After 50 years, the war on drugs has not been won, Santos said. It had caused “more damage, more harm” to the world than a practical approach that would regulate the sale and consumption of drugs in a “good way”.
The WHO estimated in 2011 that 83% of the world’s population lived in countries with low or non-existent access to opioid pain relief.
The commission’s recent report looks into how “biased” historical classification of substances, with its emphasis on prohibition, has contributed to the world drug problem. Under the current system, in place since 1961, decisions on classifying drugs are taken by the Commission on Narcotic Drugs (CND), a body of UN member states established by the UN Economic and Social Council. The WHO Expert Committee on Drug Dependence provides recommendations to the CND. However, the recommendations are then voted on by the CND members, leaving them open to political decisions.
Helen Clark, the former prime minister of New Zealand, said the WHO should make decisions on drug classification based on health and wellbeing. More harmful drugs would require a higher level of intervention, she said.
“The international community should recognise that the system is broken,” said Clark. “They should recognise the inconsistencies and it should trigger a review.”
Risk thresholds, such as those used for alcohol, should be used for illegal drugs rather than the “absolute precautionary principle”, she said.
The commission called on the international community to move towards the legal regulation and use of drugs. In January, the WHO recognised the medical benefits of cannabis and recommended it be reclassified worldwide.
Michel Kazatchkine, French physician and former executive director of the Global Fund to Fight Aids, Tuberculosis and Malaria, said that 75-80% of the global population do not have access to medicines and “all of the reasons are linked to repression and prohibition-based control systems”.
“These restrictive policies under international control have been impeding and are continuing to impede medicines that are not only needed, but are on the WHO list of essential medicines.”
He said a “crisis of regulation” in the US had led to the “dreadful consequences” of the opioid crisis, as a result of which 72,000 people died in 2017.
“We need to think of these things with a fresh outlook,” said Anand Grover, the former special UN rapporteur for health, India. “We can’t go with the cultural biases of the west.”
Credit: The Guardian
The Financial Times published a letter penned by our coordinator Marco Perduca and programme officer Guido Long on the recent news in the UK that some Conservative leadership candidates have used drugs in the past.
Below is the full text of the letter:
The story dominating the weekend news – that Conservative leadership hopeful Michael Gove used cocaine more than 20 years ago – appears wholly out of place. In particular, the fact that he and others, such as Rory Stewart, had to apologise for what is purely private behaviour (in Mr Stewart’s case, he wasn’t even in the UK at the time) had a surreal feel.
Instead of discussing their solutions to what has been dubbed the “biggest issue in a generation” – the housing crisis – or to environmental degradation and climate change, candidates to be the next PM spent their weekend explaining which drugs they took and why. Home secretary Sajid Javid, whose brief includes the subject, declared that simply using drugs is a crime and hinted that those who do are complicit in murder.
As is too often the case, nobody looks at the evidence. Yes, drugs can be very harmful and cause a supply chain of pain, but they do so mostly because they are illegal. The fact that most candidates to lead the country have used drugs in the past shows that mere drug use doesn’t ruin one’s life. It is always the most disadvantaged members of society who suffer the most.
We would expect leadership candidates to detail their plans for a sensible drug policy, instead of detailing how sensible (or not) their youth choices were.
Report on the workshop, by Marco Perduca and Claudio M. Radaelli
What is the future of evidence-based policy? Does a new generation of evidence-based policy initiatives exist, and if so, what should we call it – evidence-based policy 2.0? How does it differ from 1.0? We addressed these questions at an international workshop (8 April 2019) hosted by the Global Governance Institute at University College London, School of Public Policy, funded by Science for Democracy – Associazione Luca Coscioni – with a contribution from the project Procedural Tools for Effective Governance (Protego), a European Research Council Advanced Grant.
The workshop was organised around a set of round tables, each with its own distinctive set of questions. The participants conveyed nuanced ideas and reported on a range of empirical findings that, at least on some issues, do not point to a single conclusion. Aware of this diversity of opinions and approaches, we, the authors of this report, nevertheless want to draw what seem to us the most important lessons from the workshop.
To kick off, consider the following, admittedly blunt proposition: the old generation of evidence-based policy initiatives (typified in the UK by the Blair government’s enthusiasm for the concept) was about using science and evidence to fill the information deficit of decision-makers. Key to evidence-based policy 1.0 was the notion that evidence (from the natural sciences, risk assessment, economics, randomized controlled trials, and so on) would reduce uncertainty in policy choice. Although bounded rationality had been understood since the 1950s, the first wave of evidence-based policy failed to take into consideration the way we actually think. Hence the biases of decision-makers were not part of the equation.
The causal arrow, then, was supposed to work more or less like this:
EVIDENCE -> REDUCTION OF UNCERTAINTY -> IMPROVED DECISIONS 
Several empirical studies have documented the limitations (if not the failure) of the model portrayed above. The problem is that the policy process features ambiguity in addition to uncertainty. Ambiguity means changing definitions of the policy problem, variation over time in the venues where the search for alternatives is carried out, and actors that come and go across those venues – hence ambiguity implies instability of the network of actors. Following Paul Cairney and others, ambiguity cannot be eliminated: it is a key characteristic of policy processes in democratic systems. Nor are policymakers usually looking for a simple yes/no solution to which a piece of evidence can readily provide an answer.
To be credible, the agenda for evidence-based policy 2.0 – we submit – should put forward propositions that apply to a world where both uncertainty and ambiguity are present. Empirically, a sensible agenda for evidence-based policy 2.0 should take into consideration the differences in preferences between politicians and bureaucrats – ‘decisions’ do not come out of a black box, but are the product of the nexus connecting public managers and their political masters. A fundamental lesson drawn from the behavioural sciences is that politicians and bureaucrats, like all humans, have a brain that operates in different modes, is influenced by well-known biases, and is constrained by bounded rationality. The same sciences that have shown the range of biases and heuristics also point to possible ways to de-bias decision-making processes. In short, we are more aware of what happens in a world of bounded rationality, and have learned about de-biasing. Evidence-based policy 2.0 also models the incentives and preferences of scientists and decision-makers, meaning that both scientists and decision-makers are endogenous to the explanation.
Conceptually, this agenda should be sensitive to the importance of mechanisms operating in specific political and administrative contexts. Mechanisms are the WHY of the explanation: they tell us why certain things happen, or do not happen, in evidence-based policy processes. These mechanisms are not the same everywhere and at all times. Indeed, they operate in specific contexts where governance is modelled around relations between elected politicians and public organizations (such as government departments and regulatory agencies). Further, today the problem is less one of information deficit and more one of information surplus – how to direct attention in a world where information is cheap and abundant, even though its quality may differ greatly.
To wrap up, the three important points for the evidence-based policy 2.0 agenda are: (1) there is ambiguity as well as uncertainty in public policy processes; (2) these processes feature various types of linkages between evidence and decisions, in different settings, and require a realistic model of how the brains of decision-makers work and their biases; and (3) there is a high ratio of noise to signal – a surplus of information.
The arrows of evidence-based policy 2.0 can be represented as follows:
SCIENCE AND EVIDENCE -> DECISION-MAKING PROCESS = Function of (UNCERTAINTY + AMBIGUITY) -> MECHANISMS IN CONTEXT-> REAL-WORLD POLITICIANS AND BUREAUCRATS MAKE DECISIONS 
One way to present the findings of our workshop is that collectively, as a group, we tried to put flesh on the bare bones of the causal relationship sketched above. Another exercise is to take a critical look at the bones themselves, and then rethink the flesh. In fact, the correct causal bones may not be the ones portrayed above. It is fair to argue that public policy and social norms shape the kind of science and evidence that is or is not allowed to feed into the decision-making process. Further, we know that systems like ‘science’, ‘society’ and ‘law-making’ follow their own internal logics, whilst the arrows make us think of smooth, or at least logical, sequences. Following Boswell and Smith, we can think of four models of research–policy interaction: (a) research, science and evidence are used to make public decisions; (b) political power and social norms shape knowledge; (c) socially relevant knowledge is co-produced in the spheres of research and governance; and (d) research and policy are essentially autonomous worlds. All four approaches deserve attention, particularly at a moment when governments design policies and funding mechanisms for universities based on the ‘impact of research’. These policies should not presuppose simplistic understandings of concepts like ‘impact’ and ‘utilization of knowledge by policy-makers’ – the Research Excellence Framework (REF) in the UK being a case in point. The risks are misallocated funding and the wrong incentives for researchers.
MODELING ACTORS – Consider the arrows above. We see different actors: scientists, politicians, and bureaucrats. At a minimum we should model these actors. What do they want? Decades of research in public management and political science have informed us of the different preferences of politicians and bureaucrats. They want different things: consensus and votes for politicians; task expansion, reputation and standard operating procedures for public managers. But it is not just a question of preferences. There are also social norms and emotions. Whether we look at how organizations learn, at the logic of negotiating truth in science and public policy, or at field experiments, the message is that emotions carry explanatory leverage when it comes to the delivery of evidence-based policy. Thus evidence-based policy 2.0 should accommodate both the logic of incentives and the logic of emotions – at a higher conceptual level, choice and appropriateness, in a context of bounded rationality, heuristics and biases. Finally, no matter what the logic of interests and emotions tells us, there is the hard ceiling (for evidence to have an impact on policy) of organizational capacity.
OF SCIENCE AND SCIENTISTS – And yet, we have not said a word about the other actor: the scientist. Here science and technology studies provide their lessons. Although we assume that evidence-based policy 1.0 is typical of naïve policy-makers, the same naïve belief may exist in the minds of scientists when they discount the complexity (as well as the values) of public decision-making. If we say that all scientists have to do is speak truth to power, we cover only a fraction of the evidence-based policy 2.0 picture. As research on policy learning has demonstrated, the speaking-truth-to-power attitude of the scientist brings failure under certain characteristics of the policy process. It can work when the policy process approaches the conditions of epistemic learning, but it delivers much less as soon as we enter bargaining, authority, or a level playing field between lay and professional knowledge.
More fundamentally, speaking truth to power does not tell us anything about the preferences of scientists. They care about truth and science, of course, but they also care about their reputation and about funds for their institutes and projects. This is not necessarily a bad thing. Indeed, in some circumstances being dependent on funding from policy-makers can have a positive effect: researchers who need to compete for funding from policy-makers and bureaucracies arguably develop a better understanding of the policy process and of the needs of their clients – they have to, in order to get funding.
Some scientists pursue their preferences by talking up science. Some of us pointed to cases where scientists oversell. They do so because they want more prestige and want to break through to public opinion and decision-makers. The phenomenon may not entail anything wrong: a climate scientist with information about seasonal forecasts sees the importance of this information and is puzzled why it is not used more widely. A policy-maker may not quite understand how to use it. So the scientist keeps pushing the evidence on the table. Is this really overselling?
There is also an issue of communication. Communicating the bounds of knowledge in the language of probability is correct, and it mitigates the tendency to oversell. This is the territory of probability, sensitivity analysis and the critique of incredible certitude. Scientists should adopt the language of humble science, prudence, and openness to conjectures and refutations. And yet, other participants asked: how exactly will being humble and speaking the language of probability contribute towards success in conveying the climate change challenge that we face? How can this approach meet the logic of communication in a world of fast, succinct social media?
We settled on the following proposition: Science can help policymakers make sense of their own ambiguity but they have to accept their own uncertainty.
Further, where does communication take place? There are venues other than social media, such as deliberative and participatory settings. Although there is much talk of a loss of trust in experts, deliberative and participatory policy experiments suggest that ordinary citizens may benefit from dialogue with scientists, given the correct scope conditions. The conditions for public engagement as a means to increase or restore public trust in science and experts are: avoiding self-selection (where only the already knowledgeable and educated citizens participate), calibrating engagement so that citizens can effectively develop their knowledge during citizens–experts panels, and avoiding domination. Crucial is the coupling between deliberative and institutional fora: engagement deteriorates in quality and participation over time unless its results feed into the decision-making process. Co-production of research with stakeholders is a collaborative model often presented as a template, but some argue that co-production has many hidden costs, which are unequally borne by the participants.
Finally, we often think of science as something public, done in universities, public institutions and publicly funded labs. But today a lot of science is commercial, carried out in private settings by company labs. In a post-industrial economy, private funding of research and development is inevitable and not problematic in itself. What is problematic is accountability – for example, the failure of pharmaceutical companies to report negative findings. Of course, failure to publish negative results is not unique to the private sector, but it is a particular problem given the financial implications for coverage. Other participants observed that when it comes to trust and accountability, the issue is not necessarily whether a research institute is privately owned, citing examples from Scandinavia.
UNCERTAINTY AND USAGES – The effects of uncertainty on science and on public decisions are asymmetrical. Uncertainty is precious in science: it is the trigger of scientific enquiry and is always present in processes of scientific discovery. In a sense, for a scientist, more uncertainty in a given domain is a good thing – it means there is a lot of promising research to be done. For policy-makers, instead, uncertainty is, so to speak, ‘bad’: they do not want to follow arguments cast in the logic of uncertainty. When this asymmetry is coupled with ambiguity, the scene is set for multiple usages of science in public decisions. Science can be used INSTRUMENTALLY, to improve policies, or POLITICALLY, to improve popularity, elections, visibility, campaigns, and so on. Governments adopt reforms that have higher expected political payoffs rather than those with higher instrumental value. However, anyone who wants to reform and use science instrumentally has to be aware of the political feasibility of the reform. Consequently, instrumental and political considerations do not always clash; they can also be complementary.
Science can also be deployed SYMBOLICALLY, to put a veneer of ‘scientific’ justification on decisions – a kind of back-of-the-envelope, justificatory science. For this reason, the evaluation of evidence used in public decision-making processes should be as pluralistic as possible. What matters here are a society-wide review of the scientific basis of public decisions (coming from different institutes and think tanks) and citizens mobilized to defend and extend their right to science. On the first point (wide societal and pluralistic review), regulators and governments should fund institutes and think tanks to carry out their own autonomous reviews of the evidence used by regulatory agencies and lawmakers, at least in cases of major controversial regulations. This idea was originally discussed in the USA by Resources for the Future, but it could be applied to the European Union. On the second point, the examples of Sense about Science and Science for Democracy show how advocacy for the right to science may work in Europe and at the level of the United Nations.
SUCCESS? Whether we call it evidence-based or evidence-inspired policy, we must be clear on the goal we have in mind. There are four fundamental dimensions of success:
(a) Success in INFLUENCING policy-makers
(b) Success on the SUBSTANCE of policy. Policy-makers may ‘successfully learn’ the wrong lesson by adopting the weaker scientific argument because it is close to their ideology, and fail to learn the correct one. Clearly this is not successful evidence-based policy in terms of substance, even though the decision-makers have certainly been ‘influenced’ by science.
(c) Success in preventing wrong choices – and, more generally, success in REACTIVE mode
(d) Success in PROACTIVE mode, leading towards the right choice
Although there is no hard evidence, the literature seems to point more frequently to success in reactive mode – that is, cumulative evidence assists when failure of existing decisions or non-decisions is widespread. The challenge is to generate success in proactive mode and on science-based issues.
Finally, there is the problem of documenting success. Arguably, there is a publication bias towards documenting failure rather than success. Of course, studying the inefficiencies and limitations of the use of science in public decisions is instructive: scientists embrace critical and sceptical thinking about what government does. For public managers, by contrast, the incentive to document success is clear: they need to collate and show successes to be promoted, to account for their budgets, and to report on how well their country is doing within international organisations. The two worlds operate with different biases, and we cannot simply average out the biases of social scientists and policy-makers. For their part, social scientists should correct their bias – possibly encouraged by the choices made by the editorial committees of the main outlets for policy research, such as policy research journals.
SUPPLY AND DEMAND – We often focus on the supply of evidence and how it should be considered by decision-makers and the public. But what about the demand side? In terms of design, it is useful to think of ways in which advocacy organizations such as Science for Democracy can put pressure on politicians and regulators, make it costly to ignore evidence, and make them more likely to demand science. Procedural regulatory instruments make public administration accountable to science (broadly conceived) by design. Examples are the obligation to consult experts; to carry out and publish risk assessments; to provide estimates and sensitivity analyses of the environmental impact of legislative and regulatory proposals; to use (or not use) a given discount rate and value-of-life estimates in policy formulation; and to rely on objective counterfactual analysis in the evaluation of policy programs. These instruments for ‘accountability by design’ are examined in the Protego project for the EU-28 member states and the EU itself. Further, deliberative exercises that increase public awareness of and interest in science would not be ignored by politicians. Transparency reviews put pressure on decision-making. Official statistics should be framed, addressed and protected as public goods.
UNDERSTANDING OF SCIENCE, UNDERSTANDING OF POLICY PROCESSES – Considerable efforts have been made to increase the public understanding of science. One important goal of these efforts is to raise awareness of science among politicians and bureaucrats. However, these actors do not necessarily have truth and knowledge as their priority. For this reason, a new generation of efforts should be directed at raising scientists’ awareness of the fundamental variables at play in the policy process and of the modes of learning in public policy. In short, after having tried to explain science to politicians and regulators, social scientists should also empower natural scientists by explaining to them how policy processes vary depending on key variables. This can be done by condensing our knowledge of policy processes into formats and presentations (someone said ‘tablets’) with high potential for dissemination. It also requires a new commitment by social scientists to judge the quality of their research by how many audiences it can reach, beyond the community of other social scientists. This vision has been called translational social science, but it has many roots, such as evidence use, research uptake, knowledge mobilisation and meta-science. Whatever our backgrounds, scientists need to be cautious about how, when and whether to engage, and to ensure they use evidence-informed techniques when doing so.
Here you can find the list of Participants and themes of the Round tables.
Male animal bias is unjustified and can lead to drugs that work less well for women
The male mind is rational and orderly while the female one is complicated and hormonal. It is a stereotype that has skewed decades of neuroscience research towards using almost exclusively male mice and other laboratory animals, according to a new study.
Scientists have typically justified excluding female animals from experiments – even when studying conditions that are more likely to affect women – on the basis that fluctuating hormones would render the results uninterpretable. However, according to Rebecca Shansky, a neuroscientist at Northeastern University, in Boston, it is entirely unjustified by scientific evidence, which shows that, if anything, the hormones and behaviour of male rodents are less stable than those of females.
Shansky is calling for stricter requirements to include animals of both sexes in research, saying the failure to do so has led to the development of drugs that work less well in women.
“People like to think they’re being objective and uninfluenced by stereotypes but there are some unconscious biases that have been applied to how we think about using female animals as research subjects that should be looked at by scientists,” she said.
The male bias is seen across all fields of pre-clinical research, but one of the starkest areas is neuroscience, in which male animals outnumber females by nearly six to one. And considering the brain through a “male lens” has had public health implications, according to Shansky’s article, published in the journal Science.
In one recent example, the sleeping drug Ambien, which had been tested in male animals and then men in clinical trials, was later shown to be far more potent in women because it was metabolised more slowly in the female body. Across all drugs, women tended to suffer more adverse side effects and overdoses.
Major depression and post-traumatic stress disorder are twice as prevalent in women, but tests designed to mimic their symptoms in rodents are typically developed and validated in males. Shansky’s work shows male and female rodents can behave differently in such experiments, which could provide new insights into these conditions.
Recent research has challenged the reasoning behind using almost exclusively male animals, with one analysis of nearly 300 neuroscience studies revealing that data collected from female mice was not more variable than that from males – in fact, for some measures, the reverse was true.
Female rodents have a four- to five-day reproductive cycle, during which oestrogen and progesterone levels increase roughly fourfold. Meanwhile, male mice housed together establish a dominance hierarchy in which the circulating testosterone levels of the dominant males are, on average, five times as high as those of the subordinates.
This evidence led the US National Institutes of Health and the Canadian Institutes of Health Research to introduce mandates in 2016 requiring both sexes to be included in research. However, major UK funders such as the Wellcome Trust and the Medical Research Council have yet to introduce any similar requirements. “Now that the US and Canada have made these mandates it’s time for Europe to step up,” said Shansky.
She is also concerned about the approach taken by some research teams in the US, which incorporate both sexes in experiments by working things out in males first and then repeating the work in females. “It perpetuates the dated, sexist and scientifically inaccurate idea that male brains are a standard from which female brains deviate,” she said.
Ironically, Shansky said, the ways in which the male and female brains differ may have remained under-investigated due to a backlash against the idea of there being meaningful differences between the male and female brain.
“There’s a concern that research that shows sex differences in the brain will be weaponised by misogynists or used to justify and promote inequality,” she said. “It’s up to scientists to make sure that the message of those studies is not conveyed in a comparative way that adds any value. It doesn’t have to be a competition, it’s not about being better, it’s just about saying this is how things work.
“There’s nothing anti-feminist about saying the neurobiology in the female brain might be different.”
Source: The Guardian
Our coordinators Marco Perduca and Marco Cappato have just returned from Addis Ababa, where they visited several institutions and met with local partners, with a view to organising the 6th World Congress for Freedom of Scientific Research there in February 2020.
You can read the Presentation of the 6th meeting of WCFSR to find out why Science for Democracy decided to organise the next World Congress in Ethiopia and some of the themes that will be discussed.
After the Crispr snack of 5 March 2019, the Belgian Food Safety Authority questioned Science for Democracy’s coordinators Marco Cappato and Marco Perduca for over three hours. While there has been no notification of any sanction yet (criminal or administrative), the Agency has sent a recap file, which includes the first report, the list of confiscated material, the hearing report and pictures of the event taken by agents in front of the European Parliament. Enjoy the read (in French) and, should you wish to support Science for Democracy, here is how you can donate! Thank you!
Hopes of new treatments after research uncovers genes essential to disease’s survival
Researchers working with a revolutionary gene editing tool have discovered thousands of genes that are essential for the survival of cancer cells, holding out the prospect of major advances in treatment.
Scientists from the Wellcome Sanger Institute in Cambridgeshire worked with the Crispr/Cas9 system to disrupt every gene within 30 different types of cancer.
This led them to identify 600 genes that could be used in precision treatments, sparing sufferers the side effects of options such as chemotherapy and radiotherapy.
One gene identified is Werner syndrome RecQ helicase, which researchers found was essential for keeping alive some of the most unstable cancers but which cannot currently be targeted.
The research, which was a collaboration between the Wellcome Sanger Institute, the European Bioinformatics Institute and the pharmaceutical company GlaxoSmithKline, was published in the journal Nature on Wednesday.
It could help to bring down the cost of making effective cancer treatments: the institute said that it currently costs more than $1bn (£760m) to make a single drug, and that 90% of these fail during testing and development.
Praising the tool that made the breakthrough possible, Dr Kosuke Yusa, the co-lead author of the findings, said Crispr was “incredibly powerful” and “enables us to do science at a scale and with a precision that we couldn’t do five years ago.
“With Crispr we have discovered a very exciting opportunity to develop new drugs targeting cancers.”
Dr Mathew Garnett, also co-lead author, said: “The Cancer Dependency Map is a huge effort to identify all the weaknesses that exist in different cancers so we can use this information to empower the next generation of precision cancer treatments.
“Ultimately we hope this impacts on the way we treat patients, so many more patients get effective therapies.”
Prof Karen Vousden, Cancer Research UK’s chief scientist, told the BBC: “What makes this research so powerful is the scale.
“This work provides some excellent starting points and the next step will be a thorough analysis of the genes that have been identified as weaknesses in this study, to determine if they will one day lead to the development of new treatments for patients.”
Credit: The Guardian
Japan will allow gene-edited foodstuffs to be sold to consumers without safety evaluations as long as the techniques involved meet certain criteria, if recommendations agreed on by an advisory panel yesterday are adopted by the Ministry of Health, Labour and Welfare. This would open the door to using CRISPR and other techniques on plants and animals intended for human consumption in the country.
“There is little difference between traditional breeding methods and gene editing in terms of safety,” Hirohito Sone, an endocrinologist at Niigata University who chaired the expert panel, told NHK, Japan’s national public broadcaster.
How to regulate gene-edited food is a hotly debated issue internationally. Scientists and regulators have recognized a difference between genetic modification, which typically involves transferring a gene from one organism to another, and gene editing, in which certain genes within an organism are disabled or altered using new techniques such as CRISPR. That’s why a year ago, the U.S. Department of Agriculture concluded that most gene-edited foods would not need regulation. But the European Union’s Court of Justice ruled in July 2018 that gene-edited crops must go through the same lengthy approval process as traditional transgenic plants.
Now, Japan appears set to follow the U.S. example. The final report, approved yesterday, was not immediately available, but an earlier draft was posted on the ministry website. The report says no safety screening should be required provided the techniques used do not leave foreign genes or parts of genes in the target organism. In light of that objective, the panel concluded it would be reasonable to require information on the editing technique, the genes targeted for modification, and other details from developers or users that would be made public while respecting proprietary information.
The recommendations leave open the possibility of requiring safety evaluations if there are insufficient details on the editing technique. The draft report does not directly tackle the issue of whether such foods should be labeled. The ministry is expected to largely follow the recommendations in finalizing a policy on gene-edited foods later this year.
Consumer groups had voiced opposition to the draft recommendations, which were released for public comment in December 2018. Using the slogan “No need for genetically modified food!” the Consumers Union of Japan joined other groups circulating a petition calling for regulating the cultivation of all gene-edited crops, and safety reviews and labeling of all gene-edited foods.
Whether consumers will embrace the new technology remains to be seen. Japan has approved the sale of genetically modified (GM) foods that have passed safety tests as long as they are labeled. But public wariness has limited consumption and has led most Japanese farmers to shun GM crops. The country does import sizable volumes of GM processed food and livestock feed, however. Japanese researchers are reportedly working on gene-edited potatoes, tomatoes, rice, chicken, and fish. “Thorough explanations [of the new technologies] are needed to ease public concerns,” Sone said.
Credit: Science Magazine