Join us for an online discussion with Paul Scharre on his latest book: 'Four Battlegrounds: Power in the Age of Artificial Intelligence'.
A new industrial revolution has begun. Like mechanization or electricity before it, artificial intelligence will touch every aspect of our lives and cause profound disruptions in the balance of global power, especially among the AI superpowers: China, the United States, and Europe. Paul Scharre, a former Army Ranger and Pentagon official, will discuss his book, Four Battlegrounds: Power in the Age of Artificial Intelligence, the U.S.-China AI rivalry, and the fierce global competition to lead an AI-driven future. Dr. Scharre's first book, Army of None, won the 2019 Colby Award, was named one of Bill Gates’ Top 5 Books of 2018, and was named by The Economist as one of the top five books for understanding modern warfare. Paul Scharre is the Executive Vice President and Director of Studies at the Center for a New American Security.
If you would like to receive an invitation to this webinar, please sign up through this form. We look forward to you joining us on 29 January 2024. If you have already signed up for the originally scheduled event, there is no need to sign up again; you will receive the details by email.
Paul Scharre is the Executive Vice President and Director of Studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. His first book, Army of None: Autonomous Weapons and the Future of War, won the 2019 Colby Award, was named one of Bill Gates’ top five books of 2018, and was named by The Economist as one of the top five books for understanding modern warfare. In 2023, TIME magazine named him one of the “100 most influential people in AI.”
Scharre previously worked in the Office of the Secretary of Defense (OSD), where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the department’s policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance programs and on directed-energy technologies. Scharre was involved in drafting policy guidance for the 2012 Defense Strategic Guidance, the 2010 Quadrennial Defense Review, and secretary-level planning guidance.
Prior to joining OSD, Scharre served as a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion and completed multiple tours to Iraq and Afghanistan. He is a graduate of the Army’s Airborne, Ranger, and Sniper Schools and Honor Graduate of the 75th Ranger Regiment’s Ranger Indoctrination Program.
Scharre has published articles in The New York Times, The Wall Street Journal, CNN, TIME, Foreign Policy, Foreign Affairs, Politico, and USA Today, and has appeared on CNN, MSNBC, Fox News, NPR, and the BBC. He has testified before the House and Senate Armed Services Committees and has presented at the United Nations, NATO, the Pentagon, the CIA, and other national security venues. He holds a PhD in war studies from King’s College London and an MA in political economy and public policy and a BS in physics from Washington University in St. Louis.
Markus Anderljung is Head of Policy at the Centre for the Governance of AI. Markus's work aims to identify and improve upon existing AI governance policy recommendations. His research focuses on the potential global diffusion of EU AI policy, regulation of AI, surveys of AI researchers, compute governance, and responsible research norms in AI. He is an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory's Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist, and has served as GovAI’s Deputy Director and as a Senior Consultant at EY Sweden. Markus is based in San Francisco.
Markus's recent publications at GovAI include How to Prevent an AI Catastrophe (co-authored with Paul Scharre), Frontier AI Regulation: Managing Emerging Risks to Public Safety, and Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?