AI Singularity in Bioethics
June 17, 2024

Exploring the Ethical Implications and Future Challenges of Advanced AI in Healthcare

4 min read


Introduction

News of new advances in AI, including medical AI, has become an everyday reality. From a bioethical point of view, this is both good news and bad news. While we can hope that advances in AI will improve accessibility and equity in healthcare delivery, they also heighten our ethical concerns about bias and privacy breaches. These pros and cons of AI appear on the first page of any AI bioethics text. Today, however, I would like to touch on a commonly discussed concept in AI: the concept of AI Singularity.


The Concept of AI Singularity

AI “harbingers” say that when AI reaches “awareness”, another “Big Bang” will happen, and the world as we know it will end to make space for another. AI Singularity will be reached when intelligent machines become more intelligent than humans. Techno-pessimists project all the drawbacks of AI onto that point of AI awareness or “hyper-maturity”: instead of us humans being in control of AI, AI would become our controller.


Ethical Concerns at AI Singularity

Should we then expect that, at the time of the imagined AI Singularity, we will still face the same bioethical issues we face today, namely breaches of medical data privacy and security, and bias? Or should we imagine other concerns, in the form of AI control over humanity? Will the intelligent machines that we race to create and improve dominate us and start to tamper with our genes to enslave us? If these machines are that intelligent by then, will we be of any use to them?


Moral Standards for Intelligent Machines

Since we still have some time to think about our future with AI, let us use it while we can. We should not wait until the matter is out of our hands, as the advocates of an AI doomsday warn us. Instead of the wild imaginings above about our future with AI, should we not, from a bioethical perspective, think of alternative scenarios?


One scenario is based on the moral standards of our intelligent creations: the beings that we continue to create and that we expect to inhabit the planet after us. What moral standards will they have? Will they agree on some moral standards as the basis for their relationships? How moral will they be toward us and the other creatures cohabiting the Earth? The only valid answer to all of these questions is that whatever moral standards we feed into AI will be reflected in AI’s behaviour. Since we have not yet agreed on any bioethical standards against which to benchmark our own decisions, we should not expect AI to have any standardized moral norms. One can even argue, taking the word in one of its more superficial senses, that if we cannot agree on a single set of standards for AI bioethics, we should not fear singularity: without “singular” AI bioethical standards, AI cannot be expected to reach its “Singularity”.


Human Intelligence and AI

Another bioethical stance on the subject is that as the intelligence of machines expands, so does the human intelligence behind their creation. “Human intelligence is not solely based on logical operations and computation, but also includes characteristics such as curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and humour.” Intelligent machines are still far from fulfilling even one of these aspects. The role of the human body as a super-tool for implementing human intelligence is another factor that intelligent machines lack. Many exponential technologies besides AI are developing for our benefit; genetics and prosthetics are just two examples of technological fields that can support human intelligence rather than compete with it. Exponential technologies can serve all of the mentioned aspects of human intelligence and enhance it to the maximum extent possible.


Philosophical Debate on AI Singularity

AI Singularity, or “Technological Singularity”, also presents itself as a philosophical debate. Going down this road can benefit only a limited sector of society. Rather than focusing on the philosophical debate, we need to focus on the applied bioethical aspects of the subject. Philosophy could go deep in discussing aspects of AI Singularity that could impact our dignity, our identity, or even our survival.


Focus on Applied Bioethics

Our attention should be focused on applied bioethics. Applied bioethics of AI considers the impact of AI Singularity on issues like jobs, social inequality, health and longevity, environmental impact, and security and privacy, and asks how society can deal with these growing concerns. We can only halt the negative trajectories of AI Singularity if we develop AI bioethics at a pace parallel to the fast development of AI. Our fears of the horrible consequences of AI Singularity will be justified, rather than exaggerated, only if artificial intelligence keeps developing exponentially while our bioethical thinking lags centuries behind.

Author

Dr. Yasser Abdullah, a UK-educated ophthalmology specialist, excels in retina, glaucoma, and cataract surgery. As Clinical Informatics Lead and Safety Officer at an NHS Trust, he merges clinical expertise with technological innovation. His research in biomedical ethics, particularly on AI in medicine, is grounded in his Master’s degree from the University of Edinburgh. Abdullah is an active contributor to international and UK-based academic and clinical projects, reflecting a commitment to advancing both ophthalmologic care and ethical medical practice.


Bibliography

1. Braga, A. and Logan, R.K., 2019. AI and the singularity: A fallacy or a great opportunity? Information, 10(2), p.73.

2. Upchurch, M., 2018. Robots and AI at work: the prospects for singularity. New Technology, Work and Employment, 33(3), pp.205-218.

3. Wang, P., Liu, K. and Dougherty, Q., 2018. Conceptions of artificial intelligence and singularity. Information, 9(4), p.79.
