
Zoom: https://umac.zoom.us/j/97781274783

Password: 869095



There is now an extensive debate, both inside and outside academia, about how to make highly advanced AI systems safe and ethical. This is sometimes called the Alignment Problem. There are many proposals for how it might be solved. In this talk, I argue that most (or all) of these efforts are both futile and potentially dangerous. I end with some old-fashioned suggestions for how to move forward when thinking about AI risk.



Herman Cappelen is Chair Professor of Philosophy at the University of Hong Kong. Before moving to Hong Kong, he worked at the Universities of Oslo, St. Andrews, and Oxford, and at Vassar College. To name just some of his roles, he is currently the director of the AI & Humanity Lab at the University of Hong Kong, co-director of the Concept Lab at the University of Oslo and of Concept Lab Hong Kong, and a member of the Steering Committee of the Institute of Data Science, also at the University of Hong Kong. He was formerly the director of the Arché Philosophical Research Centre in St. Andrews for several years, and Research Director of the Centre for the Study of Mind in Nature at the University of Oslo. His current research focuses on the philosophy of AI, conceptual engineering, the conceptual foundations of political discourse, externalism in the philosophy of mind and language, and the interconnections among all of these. His philosophical interests, however, are broad: they cover more or less all areas of systematic philosophy.