Panel: Artificial Intelligence in Medicine: Impacts on Knowledge and Practices (SPT2023)
June 7 – June 10
Panel organized during the 23rd biennial international conference of the Society for Philosophy and Technology, held in Tokyo, Japan, June 7–10, 2023.
Artificial intelligence (AI) opens up new and promising perspectives for making precision medicine a reality, bringing greater accuracy, individualization and cost/time effectiveness to disease diagnosis, prognosis, follow-up and treatment. Most advances come from the use of machine learning (ML) algorithms, a popular subset of AI technologies. The use of machine learning-based devices in clinical settings raises ethical and regulatory issues that have been thoroughly addressed in the literature (including patient informed consent; safe, transparent and explainable algorithms; algorithmic fairness and bias; data privacy; patients' rights to information and explanation; cybersecurity; etc.). Until now, however, little academic attention has been paid to an additional issue that deserves particular scrutiny: the potential impacts of AI on medical knowledge and practices. Very few publications highlight the potential consequences of AI for medical professions and practices, such as the deskilling of physicians or the focus on text (i.e., data) and the demise of context. To what extent do ML algorithms have the potential to dramatically reconfigure medical knowledge frameworks (in particular, disease classifications) and practices (in particular, diagnosis, prognosis, therapeutic choices and follow-up)? What will be the impact of such an evolution on practitioners (the tasks they will focus on and perform; the new skills they will have to acquire)? The purpose of this panel is to gather scholars who tackle these issues on the basis of empirical studies.
Océane Fiant (Costech, Université de technologie de Compiègne): “Artificial Intelligence in Pathology: What Kind of Decision Support?”
Two arguments are frequently put forward to justify the deployment of artificial intelligence (AI) in medicine: either to relieve physicians of repetitive, low-added-value tasks, or to provide them with simple decision support. I will present a case illustrating the second perspective: a project that aims to build a dataset of breast cancer images, which will later be used to train artificial neural networks to detect tumor components on hematoxylin and eosin-stained whole slide images. The systematic inventory of these components should allow the pathologist to “see” things that he or she cannot detect with the naked eye, thereby improving his or her ability to analyze breast tumors, diagnose them and manage patients. However, the study of this case reveals a challenge beyond that of providing the pathologist with simple decision support. For some years now, the management of patients according to the characteristics of their tumors has relied on molecular assays that correlate genetic variants with pathological phenotypes. These assays are used in certain clinical cases to choose some therapeutic options over others. While it is possible to argue that these assays do not compete with, but merely complement, the pathologist’s expertise, the fact remains that they can guide clinical decisions according to knowledge and criteria that are not part of this practitioner’s epistemic equipment. Thus, by enhancing the latter’s ability to analyze tumors, AI tools are also part of a professional strategy to reinforce the pathologist’s expertise in the face of genomics-based approaches. My presentation aims to examine the design process of this dataset, comparing its objectives and its implementation to those of available gene expression assays (mainly OncotypeDX).
Emanuele Clarizio (ETHICS, Université catholique de Lille): “Machine Learning as a Biomedical Platform: A Philosophy of Technology Perspective”
In the first part of my talk, I will summarize the various ways in which philosophy of technology can understand the functioning of machine learning (ML) in oncology. In the second part, I will focus on a specific hypothesis that seems to me particularly fruitful: that of considering ML as a biomedical platform. Classically, philosophy of technology may consider ML applied to medicine in the following ways:
– as a simple tool available to the physician, an extra piece in the toolbox that can be used or not, according to the convenience of each physician (neutrality of technology);
– as an instrument, a sort of sensitive and intelligent prosthesis capable of sharpening the physician’s gaze and thus perfecting his or her ability to observe and diagnose (instrumentality of technology);
– as a process of automation in which the increase of calculation capacity well beyond human limits produces, through quantitative accumulation, qualitative effects: ML would thus create a new horizon for medicine, in which physicians would become obsolete because technology would replace their role (power of technology);
– as an active means that modifies the relationship between subject and object, introducing new modes of behavior, producing new practices and raising new problems (ethical, legal, organizational, etc.) (agency of technology).
While the first two approaches reflect an instrumental vision of technology as a means to serve humans and enhance their faculties, the latter two aim to identify in technology some form of normativity other than human normativity. What all these approaches have in common is that they observe technology from the point of view of its interaction with humans. Recently, however, another way of understanding technology has emerged with the development of Science and Technology Studies (STS).
STS have shown how the normativity of technology is not only located at the level of relations between individuals (whether humans or objects), but acts, more radically, by modifying and reconfiguring the context, thus allowing new objects, practices, knowledge and professional figures to emerge. In medicine, this process has been described in an exemplary way by Alberto Cambrosio and Peter Keating in terms of the emergence of biomedicine. Biomedicine is a process through which new objects, techniques, practices and actors come into existence. According to Keating and Cambrosio, the central phenomenon of biomedicine is the emergence of biomedical platforms: techno-social entities that reconfigure the practices of medicine. My hypothesis is that ML can be seen as a new biomedical platform, to the extent that it gives rise to new techniques in oncology, to which new professional figures are linked, and changes the way cancer is understood.
Gérald Gaglio (GREDEG, Université Côte d’Azur, France) and Alain Loute (Faculté de médecine/Faculté de santé publique/Institut de recherche santé et société, Université catholique de Louvain, Belgium): “Ethical Experimentation ‘in’ and ‘with’ AI in Radiology”
Drawing on empirical research in the field of artificial intelligence (AI) in radiology, this presentation seeks to identify how AI devices undergo experiments, both in the sense of experiments “in” and experiments “with” technology (Kroes 2018). While the former refers to experiments conducted in the context of the design, development and production of technological artifacts, the latter “are experiments in which technologies are implemented in real-life situations in order to achieve some practical goals during which the implementation is closely monitored for learning purposes” (Kroes 2018). We will discuss four cases of deployment of AI technologies for automatic detection in radiology, in the areas of trauma and senology. In these four cases, the technologies are “enacted” (Orlikowski & Scott 2016), tamed and partially modified to meet professionals’ needs, in a process of partial redesign. They are not passively received or adopted out of hand; they are tried and tested: professionals are literally experimenting “in technology”. We will then highlight the ethical issues that emerge from these experimentations, from the professionals’ point of view. These issues include, among others, the teaching to be given to radiology interns (Anachini & Geffroy 2021) (are they likely to rely on the automatic detection device before forming their own opinion?); the potential benefit to patients in the context of the trust to be built between radiologists and the device (When is it “wrong”? What can it “see better” than the human eye?); and the cooperation, now called into question, between the emergency department (where the vast majority of radiological examinations originate) and the radiology department. Some professionals also question the “ethical” character of these experiments, regretting that some of their colleagues were not involved. Others point out the “implicit” or “tacit” (van de Poel 2018) character of some experiments.
These observations reveal how professionals involved in an experiment “in technology” can sometimes, without explanation, become part of an experiment “with technology” (Kroes 2018), a kind of experimentation that aims at implementing a new organization of work. This presentation of our fieldwork, and of what takes place there, will enable us to raise the issue of an ethics of experimentation “in” and “with” AI technologies.