[HASTS-jobs] Fwd: postdoctoral positions at FHI, University of Oxford
Karen L Gardner
kgardner at mit.edu
Wed Dec 9 11:39:12 EST 2015
Begin forwarded message:
From: Andrew Snyder-Beattie <andrew.snyder-beattie at philosophy.ox.ac.uk>
Date: December 9, 2015 at 11:07:23 AM EST
To: undisclosed-recipients:;
Subject: postdoctoral positions at FHI, University of Oxford
Dear Sir/Madam,
My institute is inviting applications for four postdoctoral positions, which may be of interest to members of your programme. Would you be willing to circulate the information below? Thank you for your time!
All the best,
Andrew
--
The Future of Humanity Institute at the University of Oxford invites applications for four postdoctoral research positions. We seek outstanding applicants with backgrounds that could include science and technology studies.
The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.
Details of each of the positions are below. To see all FHI vacancies, please see http://www.fhi.ox.ac.uk/vacancies/.
1. Research Fellow – AI Policy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121241). We are looking for someone with expertise relevant to assessing the socio-economic and strategic impacts of future technologies, identifying key issues and potential risks, and rigorously analysing policy options for responding to these challenges. This person might have a background in economics, political science, social science, or risk analysis. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1OfWd7Q.
2. Research Fellow – AI Strategy – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121168). We are looking for someone with a multidisciplinary science, technology, or philosophy background and outstanding analytical ability. The post-holder will investigate, understand, and analyse the capabilities and plausibility of theoretically feasible but not yet fully developed technologies that could impact AI development, and relate such analysis to broader strategic and systemic issues. The academic background of the post-holder is unspecified, but could involve, for example, computer science or economics. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1jM5Pic.
3. Research Fellow – ERC UnPrEDICT Programme, Future of Humanity Institute (Vacancy ID# 121313). The post-holder will work on the new European Research Council-funded UnPrEDICT (Uncertainty and Precaution: Ethical Decisions Involving Catastrophic Threats) programme, hosted by the Future of Humanity Institute at the University of Oxford. This is a research position for a strong generalist, and will focus on topics related to existential risk, model uncertainty, the precautionary principle, and other principles for handling technological progress. In particular, the research fellow will help to develop decision procedures for navigating empirical uncertainties related to existential risk, including information hazards and situations where model or structural uncertainty is the dominant form of uncertainty. The research could take a decision-theoretic approach, although this is not strictly necessary. We also expect the candidate to engage with research on specific existential risks, possibly including developing a framework to evaluate uncertain risks in the context of nuclear weapons, climate risks, dual-use biotechnology, and/or the development of future artificial intelligence. The successful candidate must demonstrate evidence of, or the potential for producing, outstanding research in areas relevant to the project; the ability to integrate interdisciplinary research in philosophy, mathematics, and/or economics; and familiarity with both the normative and empirical issues surrounding existential risk. Applications are due by noon on 6 January 2016. You can apply for this position through the Oxford recruitment website at http://bit.ly/1HSCKgP.
Alternatively, please visit http://www.fhi.ox.ac.uk/vacancies/ or https://www.recruit.ox.ac.uk/ and search using the above vacancy IDs for more details.
--
Andrew Snyder-Beattie
Director of Research
Future of Humanity Institute
Oxford Martin School & Faculty of Philosophy
Phone: +44(0)1865 610997
Website: http://www.fhi.ox.ac.uk
We are recruiting: http://www.fhi.ox.ac.uk/vacancies/