What is the best way to determine potential uses of LLM chatbots?
Gilbert: This technology offers phenomenal potential in almost all medical fields, but also enormous risk. In its initial, uncontrolled state it is unusable. It is like taming a wild mustang: you want to bring it under control without destroying its potential.
There needs to be a public debate about whether these technologies are acceptably safe. This should not be an "anything goes" debate, nor a "too big to regulate" one — no company should be exempt simply because of its size. The result should be fair rules that apply to the whole sector.
Are there differences in the regulations in different countries?
Gilbert: Almost all countries are members of the World Health Organisation and follow its guidelines, so certain criteria apply to medical devices everywhere. For large medical language models, manufacturers have to prove that they have control over what is used as a source.
The idea that doctors won't use this technology is unrealistic. Already, the political realities and regulatory approaches in Europe and the US differ. In Australia, medical associations and hospital groups have called on doctors not to use ChatGPT during consultations. The question is how to enforce this, because the liability situation has changed as a result of that demand.