In the previous post on ChatGPT, we explored a few potential applications of ChatGPT in the healthcare industry. Despite its strong capabilities, ChatGPT has its own limitations and challenges. In this post, we shall discuss some of these, especially those that are relevant in the healthcare setting.
The first concern with ChatGPT for any healthcare professional is access to unbiased and trustworthy information. The application is not designed to assess the validity of the sources it draws from. Unlike human communication, which can acknowledge ambiguity, ChatGPT may provide incorrect, biased, or inappropriate answers without flagging any potential inaccuracy. If the healthcare providers using ChatGPT are unaware of this limitation, the program can pose a significant risk to patient well-being. On the other hand, if ChatGPT is programmed to be more cautious in its responses, it may end up declining questions it could answer correctly, diminishing its usefulness. It is also important to point out that the data used to train ChatGPT only extends through 2021. So, if you want ChatGPT to provide answers that include new guidelines or new medications published or discovered after 2021, you are out of luck for now.
One related concern is the lack of diversity and representativeness in its outputs. According to a 2019 article published in the journal JAMA Oncology, only 3% and 6% of the participants in 230 clinical trials supporting FDA oncology drug approvals between 2008 and 2018 were Black or Hispanic, respectively. Compared with US cancer incidence, both the Black population (22% of expected) and the Hispanic population (44% of expected) were significantly underrepresented. Therefore, if the data platforms used by ChatGPT draw on PubMed, the FDA database, and similar sources, there is a risk that ChatGPT will fail to address the needs of specific populations or further marginalize such groups. Representativeness in terms of the variation of health guidelines from country to country, as well as the local availability of various therapeutic options, must also be taken into consideration when relying on ChatGPT's outputs. That said, this limitation is not unique to ChatGPT; it reflects the information sources the model is trained on.
The next concern regarding ChatGPT is the lack of transparency of its sources and algorithms. Like many other AI technologies, ChatGPT is a “black box,” so we do not know precisely how its responses are generated. Additionally, there is a certain degree of unpredictability in its output, since it uses a probabilistic approach. Consequently, ChatGPT's responses can be inconsistent if a request is phrased slightly differently or if the same request is made multiple times. Improving the transparency of its data sources should be relatively easy to achieve, but the lack of transparency of its algorithms may be more challenging to resolve.
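To see why a probabilistic approach leads to inconsistent answers, consider a toy sketch (this is an illustrative assumption, not ChatGPT's actual implementation, and the word probabilities are made up): a language model assigns probabilities to candidate next words and then samples one at random, so the same request can yield different responses on different runs.

```python
import random

# Made-up probabilities for the next word after a hypothetical prompt.
next_word_probs = {"aspirin": 0.5, "ibuprofen": 0.3, "acetaminophen": 0.2}

def sample_answer(seed):
    """Sample one next word; the seed stands in for run-to-run randomness."""
    rng = random.Random(seed)
    words = list(next_word_probs)
    weights = [next_word_probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Repeating the same request (different random states) can pick
# different words, which is the source of the inconsistency.
for run in range(5):
    print(sample_answer(seed=run))
```

In a real deployment the randomness comes from the model's sampling temperature rather than an explicit seed, but the effect is the same: identical prompts need not produce identical answers.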
As mentioned in previous posts, ChatGPT is a powerful and groundbreaking AI technology. It is designed to provide a human-like conversational experience, and its potential applications in healthcare could free providers to focus on higher-order activities. However, the technology also has its limitations and concerns. Therefore, it is important to keep in mind that it is a tool, not a replacement for human reasoning.
Have you test-driven ChatGPT yet? Please share your experiences and thoughts with us!