The World Health Organization has called for caution in the use of artificial intelligence in health care, warning that the data AI relies on to make decisions can be biased or misused.
The WHO said it is enthusiastic about AI's potential but concerned about how the technology will be used to provide access to health information and to inform decision-making and diagnoses.
Data used to train AI can be biased, leading models to present misleading or inaccurate information. Language models can also be exploited to spread disinformation.
The UN health body said the risks of using generative large language model tools, such as ChatGPT, need to be assessed to protect human health.
The warning comes amid the growing popularity of AI applications and their potentially rapid adoption across all spheres of life, including health care.
Google reportedly knew that Bard was not ready for release: "pathological liar" and "worse than useless" were among the labels its own employees gave the AI during testing.
Source: Reuters