Speaker feature
Embracing the unknown: Dr Mike Schäfer chats about AI and science communication
The growing use of generative artificial intelligence (AI) in science communication could have profound implications for public engagement with science, according to Dr Mike S. Schäfer, Professor of Science Communication at the University of Zürich (Switzerland).
Schäfer attended the Communicating Discovery Science Symposium in Stellenbosch, South Africa, this November. The event, organised by Stellenbosch University’s Centre for Research on Evaluation, Science and Technology (CREST) with support from The Kavli Foundation, brought together scientists and science communication experts to discuss the nuances of, and the research behind, communicating basic science.
An expert on how science is communicated to the public, Schäfer has increasingly focused his research on AI. During a break from the symposium, he reflected on the challenges and possibilities of generative AI in science communication.
A Growing Tool in Science Communication
The release of ChatGPT in November 2022 marked a turning point for AI in science communication, Schäfer noted. Prior to its debut, AI’s role in the field was limited, but its growing popularity has sparked a surge in both research and usage.
Early studies show that science communicators initially used AI tools mostly for straightforward tasks like translation and text generation, but that this use has since rapidly evolved and diversified. By mid-2023, the tools were being used considerably more for automatic transcription, idea generation, visual content creation, and even tailoring messages to specific audiences, Schäfer explains.
A particularly interesting example is science journalism, where projects with media houses like the Washington Post are investigating how AI can improve reporting. One case Schäfer highlights is the use of AI with arXiv, an open-access archive of scholarly articles: AI can help identify relevant scientific papers in the repository and suggest potential news angles or outlets for coverage.
AI can also draw on detailed personas from audience studies, helping journalists tailor their stories to different groups, crafting content for teenagers, say, or for individuals with limited interest in science. Schäfer notes that while these developments are still in their early stages, they demonstrate AI’s diverse applications in science communication. “We have few robust studies on this yet, but they all point to increased use,” he says.
Four Key Research Directions
Yet AI remains strikingly under-researched as a topic in science communication scholarship, and articles on its role in the field are relatively scarce. Schäfer identifies four key research directions that could and should be pursued in this area.
The first – and most commonly studied to date – is the analysis of public communication about AI, similar to how we examine communication around biotechnology or nuclear science. Key questions include who is strategically communicating about generative AI, how journalists perceive it, how it is presented in the media, and what the public learns from these sources.
The second area of study involves examining how AI tools, like ChatGPT, present science. In one study, Schäfer explains, his team reverse-engineered ChatGPT’s responses to understand its perspective on science. The findings were revealing: it overwhelmingly favours the natural sciences and adopts a positivist stance, emphasising experimentation and quantitative analysis over qualitative or ethnographic approaches. While ChatGPT’s responses are generally accurate on the ‘big questions’ like climate change, vaccinations, homoeopathy, and astrology, it struggles with emerging research due to limited training data.
A third research direction focuses on the impact of generative AI on science communication and its foundations. We must investigate how people perceive and interact with AI, Schäfer says. A particularly revealing study in this field showed that when participants did not know a response was AI-generated, they rated it as more empathetic and higher in quality than human-generated responses. When people are informed that responses came from AI, however, perceptions differ, often favouring human-generated content over even identical AI-generated content.
The implications for public engagement with science could be immense, he adds. In the health field, for example, AI could be tested as a substitute for certain doctor-patient interactions. While human-to-human engagement remains the ideal, it is resource-intensive and difficult to extend to underrepresented or rural communities; AI could help bridge that gap.
Similarly, AI offers a unique opportunity to democratise access to science communication. Imagine a world, Schäfer says, where everyone has access to an AI capable of engaging in meaningful conversations about science. This could significantly broaden the reach of public engagement initiatives.
The final area of research is conceptual and theoretical work, as interactions with AI do not fit neatly into existing communication theory paradigms, which focus on human-to-human communication.
Several research projects are underway, Schäfer says. A major project of his own will focus on how various stakeholders – such as scientists, journalists, university communicators, influencers, and citizens – interact with AI in the context of science communication. Funded by the Swiss National Science Foundation, the four-year project will include a foresight panel of experts in science, technology, ethics, and regulation to provide insights into future trends and challenges.
Challenges and Ethical Considerations
Several challenges surround the use of generative AI in science communication. Schäfer highlights a major concern: the lack of clear regulations and guidelines on responsible AI use. While many communication teams are experimenting with AI, few have the training or resources to use it effectively, he says. For example, crafting effective prompts, fact-checking outputs, and navigating the biases inherent in AI models require a new kind of literacy.
Additionally, the ethical implications of gatekeeping in AI are significant. The data and algorithms behind large language models are shaped by human decisions about what content and which perspectives to include or exclude, Schäfer explains. This has sparked debates, especially in the US, where some argue that removing certain content enforces specific values. The emergence of alternative models, including those developed in China, further complicates the issue.
However, Schäfer remains optimistic about the future. While the challenges, from corporate control to societal scepticism, can seem overwhelming, they are not new. Science communication has always faced barriers to reaching diverse audiences.
“What excites me is the potential of AI to scale up solutions,” he says.
For example, tools like ChatGPT could help engage populations that traditional methods struggle to reach. If implemented thoughtfully, these technologies could foster a deeper, more inclusive public understanding of science.
The future of AI in science communication is still unfolding, and there is much to learn. But with the right research, ethical considerations, and innovative approaches, Schäfer believes we can harness these tools to create a more informed and connected world.
More information
For more information, read Schäfer’s essay The Notorious GPT: science communication in the age of artificial intelligence, published in the Journal of Science Communication (JCOM) in May 2023.
Dr Mike S. Schäfer, professor of science communication at the University of Zürich.