(Keybridge Communications) – Could terrorists or other bad actors use artificial intelligence to create a deadly pandemic?

Scientists at Harvard and the Massachusetts Institute of Technology conducted an experiment last year to find out. Researchers asked a group of students, none of whom had specialized training in the life sciences, to use AI tools, such as OpenAI's ChatGPT-4, to develop a plan for how to start a pandemic. In just an hour, participants learned how to procure and synthesize deadly pathogens like smallpox in ways that evade existing biosecurity systems.
AI cannot yet manufacture a national security crisis. But as biotechnology becomes more advanced, policymakers are understandably worried that it will become increasingly easy to create a bioweapon. So they're starting to take action to regulate the emerging AI industry.

Their efforts are well-intentioned. But it's critical that policymakers avoid focusing too narrowly on catastrophic risk and inadvertently hamstringing the creation of the beneficial AI tools we need to tackle future crises. We should aim to strike a balance.

AI tools have enormous positive potential. For instance, AI technologies like AlphaFold and RFdiffusion have already made large strides in designing novel proteins that could be used for medical purposes. The same sort of...