Artificial intelligence ethics and public confidence
Posted on October 6th 2017 by Hugh Whittall
There have been some interesting developments around the ethics and governance of artificial intelligence (AI) in recent days. First we read that Google's DeepMind has set up an Ethics and Society research unit, with the rationale that "AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work... We are committed to deep research into ethical and social questions, the inclusion of many voices, and ongoing critical reflection." The unit has a number of Fellows (independent advisors), including Oxford's Nick Bostrom, "to help provide oversight, critical feedback and guidance for our research strategy and work program".
Meanwhile, a fringe event on AI at the Tory Party conference was attended by Digital Minister Matt Hancock and Damian Collins MP, Chair of the Commons Digital, Culture, Media and Sport Committee. Addressing the meeting, Deputy CEO of techUK Antony Walker referred to the Nuffield Council on Bioethics and suggested that a similar institution should be set up to inform and drive public debate around the use of data and AI. This would provide a forum, but also an independent and expert body to ensure that academics, businesses and policymakers all have access to the best, impartial information on norms and standards around AI development. The most interesting thing about this contribution is that it suggests industry would welcome some independent advice and, presumably, governance.
This still raises the question of how the wider public plays into the issue. When we published our report on biological and health data in 2015, we stressed the need for transparency and public participation, warning that by not taking into account people's preferences and values, projects that could deliver significant public good may continue to be challenged and fail to secure public confidence. The problems that arose with the care.data initiative, and in the DeepMind collaboration with the Royal Free Hospital, illustrated that very clearly. Those cases, and our report, were specifically about the use of data in biological research and healthcare, but in developing the technology, the applications and the governance arrangements for AI and wider data use, it is increasingly clear that the public should not be left out of the conversation.
This discussion is not new, of course. Elon Musk, not known to be afraid of technology development, has been expressing his anxieties about AI, and the need for governance and regulation, for quite a while. And the Commons Science and Technology Committee published a report on Robotics and AI exactly twelve months ago. It proposed that "a standing Commission on Artificial Intelligence be established to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression." The Government responded to the Committee in December 2016 by noting that "The Royal Society is currently examining the implications of Machine Learning, alongside the Royal Society and British Academy work on Data Governance. These projects aim to develop recommendations for data governance arrangements, including ensuring the UK remains a world leader in the use and governance of artificial intelligence."
The Royal Society published its report on Machine Learning in April 2017 and, with the British Academy, a report on Data Management and Use in June 2017. Where to go with these issues is still a lively debate, involving multiple contributors from all sectors. I should just add that the use of person-related data and AI are different questions, with different sets of issues and concerns, but it will be difficult to keep them apart.
So what of the recent moves at DeepMind and the techUK proposal? If the latter is seriously interested in having the benefit of an independent advisory body, DeepMind's unit is not going to provide it. It will no doubt be an important source of knowledge and information as a research unit, but being internal to DeepMind (itself part of Google) it will not have the independence that will be essential for building public confidence. It is flattering that techUK should look to the Nuffield Council on Bioethics as a model institution. Our remit, of course, is limited to bio-related developments, and whilst AI will have applications in bio-fields (on which we will maintain a watching brief), the scope of AI and data use goes much wider than that. I'm sure techUK will also be watching out for the outcomes of the work that the Royal Society and the British Academy are pursuing, along with partners including the Nuffield Foundation (one of our funders). That is where the ethical and expert advisory structures are more likely to emerge, in my view.

Government would no doubt welcome this, with others doing the heavy lifting, but a further question is whether the Government will also be looking to bring in some more hard-edged regulatory systems for data and AI. The Conservative Party Manifesto promised a framework for data ethics, saying that "we will institute an expert Data Use and Ethics Commission to advise regulators and parliament on the nature of data use and how best to prevent its abuse". Whether this remains the intention of Government might yet be an open question, given its other current preoccupations and the number of manifesto pledges that have already been sidelined.
Tagged with: Artificial intelligence, Google DeepMind.
Posted in: Biological and health data.