
Article published online: 2022-12-04
© 2022 IMIA and Georg Thieme Verlag KG

A Literature Review on Ethics for AI in Biomedical Research and Biobanking

Michaela Kargl, Markus Plass, Heimo Müller
Medical University Graz, Graz, Austria

Summary
Background: Artificial Intelligence (AI) is becoming more and more important, especially in data-centric fields such as biomedical research and biobanking. However, AI does not only offer advantages and promising benefits, but also brings about ethical risks and perils. In recent years, there has been growing interest in AI ethics, as reflected by a huge body of (scientific) literature dealing with the topic of AI ethics. The main objectives of this review are: (1) to provide an overview about important (upcoming) AI ethics regulations and international recommendations as well as available AI ethics tools and frameworks relevant to biomedical research, (2) to identify what AI ethics can learn from findings in ethics of traditional biomedical research - in particular looking at ethics in the domain of biobanking, and (3) to provide an overview about the main research questions in the field of AI ethics in biomedical research.
Methods: We adopted a modified thematic review approach focused on understanding AI ethics aspects relevant to biomedical research. For this review, four scientific literature databases at the cross-section of medical, technical, and ethics science literature were queried: PubMed, BMC Medical Ethics, IEEE Xplore, and Google Scholar. In addition, a grey literature search was conducted to identify current trends in legislation and standardization.
Results: More than 2,500 potentially relevant publications were retrieved through the initial search and 57 documents were included in the final review. The review found many documents describing high-level principles of AI ethics, and some publications describing approaches for making AI ethics more actionable and bridging the principles-to-practice gap. Also, some ongoing regulatory and standardization initiatives related to AI ethics were identified. It was found that ethical aspects of AI implementation in biobanks are often like those in biomedical research, for example with regard to handling big data or tackling informed consent. The review revealed current 'hot' topics in AI ethics related to biomedical research. Furthermore, several published tools and methods aiming to support practical implementation of AI ethics, as well as tools and frameworks specifically addressing complete and transparent reporting of biomedical studies involving AI, are described in the review results.
Conclusion: The review results provide a practically useful overview of research strands as well as regulations, guidelines, and tools regarding AI ethics in biomedical research. Furthermore, the review results show the need for an ethically mindful and balanced approach to AI in biomedical research, and specifically reveal the need for AI ethics research focused on understanding and resolving practical problems arising from the use of AI in science and society.

Keywords
AI ethics; biomedical research; biobank; literature review

Yearb Med Inform 2022:152-60
[Link]

1 Introduction
Artificial Intelligence (AI) is becoming more and more important in all sectors [1, 2] and dominates almost all aspects of data-centric research today. It is envisaged that AI will determine the future in medicine and biomedical research [3]. However, it is widely recognized that AI brings about not only a range of advantages, opportunities, and promises, but also risks and perils. Thus, in a euphoric "gold digger" mood it is of utmost importance not to forget about the ethical aspects and consequences of AI, to ensure that AI can "work for the good of humanity, individuals, societies and the environment and ecosystems", as recently formulated by UNESCO [2].

Indeed, there is high interest in AI ethics within the scientific community, which is reflected by a huge body of (scientific) literature dealing with the topic of AI ethics. Recently, several literature reviews have been conducted to frame the topics of AI ethics in general [4, 5], or specifically AI ethics in health care [6-8]. Complementary to these existing literature reviews, the focus of this review paper is on AI ethics in biomedical research and biobanking. However, rather than shedding light on fundamental questions of ethics in this field, the main aim of this review is to address the practical needs and questions of the yearbook reader, considering that medical informatics is an applied science discipline at the cross-link of computer science, information technology, and medicine.

1.1 Goals
To achieve a practical and useful overview of AI ethics in biomedical research, the objectives of this literature review are threefold:
• to provide an overview about important (upcoming) AI ethics regulations and international recommendations relevant to biomedical research;
• to identify what AI ethics can learn from findings in ethics of traditional biomedical research - in particular looking at ethics in the domain of biobanking as an inspiring field;
• to provide an overview about the main research questions in the field of AI ethics in biomedical research.

IMIA Yearbook of Medical Informatics 2022



1.2 What is Out of Scope
Since AI ethics in the biomedical/medical domain is a broad topic, it would go beyond the scope of this review to give an exhaustive overview of the complete field. Therefore, this review concentrates on AI ethics in biomedical research. Topics which are deemed out of scope for this review include:
• general AI ethics, if the principles do not apply directly to biomedical research;
• specific legal topics, as well as national law and regulations;
• machine ethics, that is, how agents/algorithms should behave when faced with ethical dilemmas;
• AI ethics in healthcare when it is about treatment of patients and non-research topics.

However, although AI ethics with respect to treatment of patients in healthcare is beyond the scope of this review, AI ethics in relation to the acquisition of medical data will be covered in this paper, since secondary use of medical data for research purposes is an issue for biomedical research. Furthermore, although specific legal topics and national law are beyond the scope of this review, broader regulatory and political initiatives will be covered in this paper.

2 Methods
Since the aim of this review was to provide useful insights into important AI ethics regulations, recommendations and current research questions relevant to biomedical research, as well as to show how the already well-established ethics of biobanking could inspire AI ethics in biomedical research, the authors adopted a modified thematic review approach focused on understanding AI ethics aspects relevant to biomedical research in a pragmatic way. The focus of the research was on the identification of a traceable research question, the development of a robust search strategy, data gathering, and the synthesis of the key findings.

2.1 Research Question and Scoping Questions
The core research question of this review focuses on practically relevant aspects of AI ethics in the biomedical research field. Furthermore, the authors agreed to consider scoping questions, which should help with narrowing search terms, articulating queries and focusing on relevant points throughout retrieval and review of the literature. To help guide this review, these scoping questions included "Which of the existing AI ethics regulations, guidelines and recommendations are important for biomedical research?", "Are there any tools and methods available to support practical implementation of AI ethics in biomedical research?", and "Which aspects of the well-established ethics of biobanking could potentially inspire AI ethics in biomedical research?"

2.2 Information Sources and Search Strategy
For this review, four scientific literature databases at the cross-section of medical, technical and ethics science literature were queried: PubMed, BMC Medical Ethics, IEEE Xplore, and Google Scholar. In addition, a grey literature search was conducted to identify current trends in the legislation and standardization area, which are not (yet) derivable from scientific literature. The initial literature sources were retrieved in December 2021, and the research was iteratively refined and completed through the middle of January 2022.

For the PubMed queries (initial query conducted on December 22, 2021), five search terms typically used in relation to AI in the research community were used: (1) 'artificial intelligence', (2) 'AI', (3) 'machine learning', (4) 'deep learning', and (5) 'big data'. These were combined with the term 'ethic*', whereby the wildcard, that is the asterisk at the end of the term 'ethic*', allowed for variations such as 'ethics', 'ethical', or 'ethically'. The PubMed search was limited to the title and abstract fields of the database. In addition, to retrieve only the most current results, a filter for the publication years 2017-2021 was applied. Furthermore, the query results were restricted to documents in the English language where the full text was available. For querying BMC Medical Ethics (initial query conducted on December 9, 2021), there was no need to apply the search term 'ethic*', as only papers related to medical ethics are included in this database. Thus, for querying BMC Medical Ethics we applied the search terms 'biomedical', 'research', 'artificial', and 'intelligence'. For querying IEEE Xplore (initial query on December 10, 2021) and Google Scholar (initial query on December 15, 2021), we combined the search terms 'ethic*' and 'biomedic*' with 'artificial intelligence', and restricted the results to publications since 2017. Finally, since this query strategy for Google Scholar returned a huge number of results that were not relevant to our research question, we further restricted the query results from Google Scholar by excluding the terms 'healthcare', 'robot*' and 'social media'. In addition, for finding relevant grey literature, a Google search with the search terms 'standardization' & 'AI' & 'ethics' and a Google search with the search terms 'legislation' & 'AI' & 'ethics' were conducted. The first 20 results of each of these Google queries were used as a starting point for a wide and rapid search to find relevant grey literature pointing to international standards and norms as well as legislation relevant to AI ethics in biomedical research.

2.3 Paper Selection and Analysis
Through the search strategy described in the previous section, in total 2,525 scientific publications were returned, and about 20 websites/online articles pointing to potentially relevant aspects of laws and standards were found. For the first stage of document selection, these results were screened based on titles, and those documents/articles clearly not relevant to the research question were discarded. As a result of this first selection procedure, and after removal of duplicates, 168 documents remained on the shortlist for review on the abstract. In a second step, the full papers of these shortlisted articles were downloaded, and two reviewers screened the documents on abstracts against the inclusion and exclusion criteria to identify the potentially relevant papers. Consensus between the two reviewers was reached by discussion. The result of this second stage of document selection was a final list of 66 articles to be read in more detail. Resulting from this in-depth analysis, 57 documents were included as references for this review paper. The process of article selection as applied for this review is depicted in Figure 1.
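The query logic described above can be sketched in code. The snippet below is our own illustration, not part of the authors' method: they ran their searches through the database web interfaces, and the helper names and exact field syntax used here (e.g. `[Title/Abstract]`, `[pdat]`) are assumptions for the sketch.

```python
# Illustrative sketch of the search-string construction described above.
# The authors queried the databases via their web interfaces; function
# names and field syntax here are our assumptions, not the paper's code.

AI_TERMS = ["artificial intelligence", "AI", "machine learning",
            "deep learning", "big data"]

def build_pubmed_query(ai_terms, ethics_term="ethic*", years=(2017, 2021)):
    """OR the AI terms together, AND with the wildcard ethics term,
    restrict to title/abstract fields, and filter by publication year."""
    ai_part = " OR ".join(f'"{t}"[Title/Abstract]' for t in ai_terms)
    return (f"({ai_part}) AND {ethics_term}[Title/Abstract] "
            f"AND ({years[0]}:{years[1]}[pdat])")

def build_scholar_query(include, exclude):
    """Google Scholar style: quoted include terms, minus-prefixed excludes."""
    inc = " ".join(f'"{t}"' for t in include)
    exc = " ".join(f'-"{t}"' for t in exclude)
    return f"{inc} {exc}".strip()

query = build_pubmed_query(AI_TERMS)
scholar = build_scholar_query(
    ["ethic*", "biomedic*", "artificial intelligence"],
    ["healthcare", "robot*", "social media"])
print(query)
print(scholar)
```

Keeping the term lists in one place like this makes the search strategy easy to document and to re-run when the review is updated.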

3 Results

3.1 Legislation, Guidelines, and Standards

3.1.1 Relevant Regulatory and Political Initiatives
In April 2021, the European Commission proposed the Artificial Intelligence Act [9], which is worldwide the first legal framework on AI. The proposed regulation lays down harmonized rules on AI to support the objective of the European Union being a global leader in the development of secure, trustworthy and ethical artificial intelligence and to ensure the protection of ethical principles. According to this proposed regulation, 'high-risk' AI systems, which pose significant risk to the health, safety, or fundamental rights of persons, will have to comply with mandatory requirements for trustworthy AI and follow conformity assessment procedures before these systems can be placed on the European Union market. To ensure safety and protection of fundamental rights throughout the whole AI system's lifecycle, the Artificial Intelligence Act also sets out clear obligations for providers and users of these AI systems [9].

In June 2021, the World Health Organization (WHO) published a guidance document on "Ethics and Governance of Artificial Intelligence for Health" [3], and in November 2021, all 193 Member States of the UN Educational, Scientific and Cultural Organization (UNESCO) adopted an agreement that defines the common values and principles needed to ensure the healthy development of AI [2]. Furthermore, this UNESCO agreement encourages governments to set up a regulatory framework that defines a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems to predict consequences, mitigate risks, avoid harmful consequences, facilitate citizen participation and address societal challenges [2].

Fig. 1 Overview of the article selection process applied for this review.


3.1.2 High-level AI Ethics Principles
There is high interest in AI ethics, which is reflected in many AI ethics guidelines developed by government, science, industry and non-profit organizations in recent years. In 2019, Jobin et al. identified 84 documents containing principles and guidelines for ethical AI [4]. Jobin et al. found that most of these guidelines were produced by private companies (22.6%) and governmental agencies (21.4%) in more economically developed countries. By analyzing these 84 guideline documents for ethical AI, Jobin et al. revealed eleven overarching ethical values and principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity [4]. Although none of these eleven ethical principles appeared in all of the analyzed guideline documents, the first five principles listed above were mentioned in over 50% of the guideline documents analyzed by Jobin et al. [4]. In line with the findings of Jobin et al., Hagendorff, who analyzed and evaluated 22 major AI ethics guidelines, also found that especially the aspects of accountability, privacy and fairness appear in about 80% of these guidelines [10]. In a systematic literature study, Khan et al. [5] identified 22 ethical principles relevant for AI, and found that transparency, privacy, accountability and fairness were the four most common ethical principles for AI. In a comparative analysis of 6 AI ethics guideline documents issued by high-profile initiatives established in the interest of socially beneficial AI, Floridi and Cowls also found a high degree of overlap [11]. However, Floridi and Cowls state that overlaps between different guidelines must be taken with caution, since similar terms are often used to mean different things [11], and Jobin et al. also found significant divergences among the analyzed guidance documents regarding how ethical principles are interpreted, why they are deemed important, what domain/actors they pertain to, and how they should be implemented [4]. Loi et al. likewise stated that it is difficult to compare the ethical principles in the existing AI ethics guidelines, as some guidelines cluster values that others keep separated, and many definitions of values are rather vague [12].

As a result of their comparative analysis of AI ethics documents, Floridi and Cowls [11, 13] developed a framework of five overarching ethics principles for AI, composed of the four traditional bioethics principles beneficence, non-maleficence, autonomy, and justice, and one additional AI-specific principle, explicability, which incorporates both the answers to "how does it work?" and "who is responsible for the way it works?". According to Floridi and Cowls, these five principles capture all of the principles found in the analyzed documents and in addition also form the basis of the "Ethics Guidelines for Trustworthy AI" [1] and the "Recommendation of the Council on Artificial Intelligence" [14], published by the European Commission and the OECD, respectively, in 2019 [11].

3.1.3 Towards Actionable AI Ethics
The fundamental AI ethics principles formulated in many guidelines are rather theoretical concepts and philosophical foundations. The complexity, variability, subjectivity, and lack of standardization, including variable interpretation of the ethical principles, are major challenges to practical implementation of these AI ethics principles [15]. Khan et al. identified fifteen challenging factors for practical implementation of ethics in AI, whereby the lack of ethical knowledge and the vaguely formulated ethical principles were the main challenges that hinder the practical implementation of ethical principles in AI [5]. In addition, Schiff et al. [16] also found socio-technical and disciplinary divides as well as functional separations within organizations as explanations for the principles-to-practices gap. To tackle these issues hindering the practical implementation of AI ethics, guidance needs to go beyond high-level principles. In the following paragraphs, some approaches to make AI ethics guidelines suitable for practical use are described.

The "Ethics Guidelines for Trustworthy AI" [1], developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG), which was set up by the European Commission in June 2018, aim to offer guidance for fostering and securing ethical and robust AI by providing not only high-level ethics principles but also guidance on how these principles can be implemented in practice. This guidance document identifies three high-level ethical principles, namely respect for human autonomy, prevention of harm, and fairness & explicability, which should be respected in the development, deployment, and use of AI systems [1]. To provide guidance on how these principles can be implemented, seven key requirements that AI systems should meet are listed: human agency & oversight, technical robustness & safety, privacy & data governance, transparency, diversity, non-discrimination & fairness, environmental & societal well-being, and accountability [1]. These seven key requirements are also included in the Artificial Intelligence Act proposed by the European Commission in 2021 [9].

To provide concrete practical guidance specifically for organizations developing and using AI, Ryan and Stahl [17] retrieved detailed, practically useful explanations of the normative implications of common high-level ethical principles.

Loi et al. propose a framework of seven actionable principles suitable for practical use:
(1) Beneficence: do the good (promote individual and community well-being and preserve trust in trustworthy agents);
(2) Non-maleficence: avoid harm (protect security, privacy, dignity, and sustainability);
(3) Autonomy: promote the capabilities of individuals and groups (protect civic and political freedoms, privacy, and dignity);
(4) Justice: be fair, avoid discrimination, promote social justice and solidarity;
(5) Control: knowledgeably control entities, goals, process, and outcomes affecting people;
(6) Transparency: communicate your knowledge of entities, goals, process, and outcomes, in an adequate and effective way, to the relevant stakeholders;
(7) Accountability: assign moral, legal, and organizational responsibilities to the individuals who control entities, goals, process, and outcomes affecting people [12].
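To illustrate what "actionable" can mean in practice, a principles framework such as the seven principles of Loi et al. can be operationalized as a simple self-assessment checklist. The sketch below is our own minimal illustration, not a tool from the reviewed literature; the review questions and the example project answers are hypothetical.

```python
# Minimal self-assessment sketch mapping the seven actionable principles
# of Loi et al. to review questions. Questions and project answers are
# hypothetical illustrations, not taken from the reviewed literature.

PRINCIPLES = {
    "beneficence": "Does the system promote individual and community well-being?",
    "non-maleficence": "Are security, privacy, dignity, and sustainability protected?",
    "autonomy": "Are the capabilities and freedoms of individuals and groups promoted?",
    "justice": "Is the system fair, non-discriminatory, and solidarity-promoting?",
    "control": "Are entities, goals, processes, and outcomes knowledgeably controlled?",
    "transparency": "Is relevant knowledge communicated to the relevant stakeholders?",
    "accountability": "Are responsibilities assigned to the individuals in control?",
}

def unaddressed(assessment):
    """Return, sorted, the principles with no documented answer yet."""
    return sorted(p for p in PRINCIPLES if not assessment.get(p))

# Hypothetical project: answers documented for three principles only
project = {
    "beneficence": "Improves diagnostic support for rare diseases.",
    "non-maleficence": "Pseudonymized data; access controls in place.",
    "transparency": "Model cards published for each release.",
}
print(unaddressed(project))  # → ['accountability', 'autonomy', 'control', 'justice']
```

Even such a trivial checklist makes the principles-to-practice gap visible: the output names exactly the principles for which a project still owes documentation.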


There are several initiatives underway to define standards to support practical implementation of AI ethics - a list of international organizations engaged in AI ethics related standardization is given in [18]. For example, the IEEE Standards Association launched the IEEE P7000® series of eleven standardization projects dedicated to societal and ethical issues associated with AI systems [19]. Also, the International Organization for Standardization's technical committee ISO/IEC JTC 1/SC 42 Artificial Intelligence is currently working on an overview of ethical and societal concerns of information technology and artificial intelligence. This forthcoming technical report, ISO/IEC DTR 24368, shall provide guidance to other ISO/IEC technical committees developing standards for domain-specific applications that use AI [20].

Besides these general efforts for making AI ethics principles more suitable for practical use, there are also some initiatives targeted specifically at AI ethics in the medical and biomedical domain, as described in the following paragraphs.

Müller et al. [21] formulated the following ten commandments of ethical medical AI as practical guidelines for applying AI in medicine:
(1) It must be recognizable that and which part of a decision or action is taken and carried out by AI;
(2) It must be recognizable which part of the communication is performed by an AI agent;
(3) The responsibility for an AI decision, action, or communicative process must be taken by a competent physical or legal person;
(4) AI decisions, actions, and communicative processes must be transparent and explainable;
(5) An AI decision must be comprehensible and repeatable;
(6) An explanation of an AI decision must be based on state-of-the-art (scientific) theories;
(7) An AI decision, action, or communication must not be manipulative by pretending accuracy;
(8) An AI decision, action, or communication must not violate any applicable law and must not lead to human harm;
(9) An AI decision, action, or communication shall not be discriminatory. This applies to the training of algorithms;
(10) The target setting, control, and monitoring of AI decisions, actions, and communications shall not be performed by algorithms [21].

Since a lack of effective interdisciplinary practices has been identified as an issue hindering practical implementation of AI ethics [16], Jongsma and Bredenoord [22] suggest 'ethics parallel research', a process where ethicists are closely involved in the development of new technologies from the beginning, as a practical approach for ethical guidance of biomedical innovation.

The Primary Care Informatics Working Group of the International Medical Informatics Association (IMIA) stated 14 principles for ethical use of routinely collected health data and AI [23] and formulated the following six concrete recommendations:
(1) Ensure consent and formal process to govern access and sharing throughout the data life cycle;
(2) Sustainable data creation & collection requires trust and permission;
(3) Pay attention to Extract-Transform-Load processes as they may have unrecognized risks;
(4) Integrate data governance and data quality management to support clinical practice in integrated care systems;
(5) Recognize the need for new processes to address the ethical issues arising from AI in primary care;
(6) Apply an ethical framework mapped to the data life cycle, including an assessment of data quality to achieve effective data curation [23].

3.2 AI-Ethics in Biobanking
Biobanks are an important infrastructure in medical research and are a long-established (research) field traditionally concerned with ethical issues. In the overview textbook "The ethics of research biobanking", Solbakk et al. group the ethical issues related to biobanks into four clusters: a) issues concerning how biological materials are entered into the bank; b) issues concerning research biobanks as institutions; c) issues concerning under what conditions researchers can access materials in the bank, and problems concerning ownership of biological materials and of intellectual property arising from such materials; and d) issues related to the information collected and stored, e.g., access rights, disclosure, confidentiality, data security, and data protection [24].

In our review on AI-related ethics in biobanking, we intentionally excluded the field of clinical trials as well as non-data-centered and interventional studies, as all of these rely on special requirements and specific legal frameworks. In this review, we focus on population-based biobanks, study-oriented biobanks and clinical biobanks aiming for secondary use of medical data, e.g., in the development or validation of AI algorithms. In this case, a biobank itself has already covered a series of ethical issues during the collection of samples and data and has a clear policy for data reuse in research projects, which should cover both non-AI and AI-based scenarios. How to implement AI in international biobanks, covering also ethical, legal and governance requirements, is described by Kozlakidis [25], and future possibilities of AI in biobanking are described by Lee [26]. Data sets related to biosamples can be high-volume, high-velocity and high-variety information assets, which means we can speak of 'big data' in biobanks. To such 'big data', AI methods such as machine learning and deep learning can be applied to analyze and extract knowledge, for example to train automatic decision-making systems [27]. Whenever data from humans are used in the development of AI-based models, questions arise about how data providers and donors are informed about their involvement, and these questions are very similar in biobanking and AI development. Thus, good informed consent plays a central role in both biobanking and AI development. Jurate et al. analyzed consent documents in terms of the model of the consent, scope of future research, access to medical data, feedback to the participants, consent withdrawal, and role of ethics committees [28]. The transition of biobanks from a simple sample storage service to data banks and data curation centers, e.g., for longitudinal and population-based biobanks, brings a biobank into the role of a trusted data repository and fate-keeper for secondary use of medical data. When the data objects are used in the training and validation of high-risk AI (this is the case for most medical AI solutions), biobank guidelines should follow the recommendations as laid out in Article 10 (data and data governance) of the European Commission's harmonized rules in the Artificial Intelligence Act [9]. For ethical AI study design, Chauhan and Gullapalli propose an inclusive AI design and bias-covering sample choice and valuation. They also address the controversial concept of race in ethical design [29]. Besides the primary data source, they also raise the question of other stakeholders, for example how a pathologist who worked on generating the annotation should be compensated. In addition to clinical reporting guidelines, Baxi et al. call for a similar approach to bias risk guidelines in data annotation, for example in digital pathology [30]. At an institutional level, Gille et al. propose mechanisms for future-proof biobank governance, which help to signal trustworthiness to stakeholders and the public. These mechanisms, which are proposed for biobank governance, can also be applied to AI ethics in biomedical research institutions [31].

3.3 Main Research Topics Regarding AI Ethics in Biomedical Research

3.3.1 General Research Topics
A scoping review of the ethics literature in the medical field conducted by Murphy et al. [8] revealed that the main research in that area was related to the common ethical themes of privacy and security, trust in AI, accountability & responsibility, and bias [8]. In a recent bibliometric analysis, Saheb et al. [32] found twelve clusters of research questions in AI ethics, from which the following […] interpretability, explainability, replicability, algorithm bias, error risk, and transparency of data flow), predictive analytics ethics (addressing e.g., discriminatory decisions and contextually relevant insight), normative ethics (addressing e.g., discrimination by generalization of AI conclusions, justice, fairness, and inequality), and relationship ethics (addressing e.g., user interfaces and human-computer interaction, as well as relationships between patients, physicians and other healthcare stakeholders) [32].

In addition to research in these areas identified by Saheb et al. [32], researchers in the field of AI ethics, such as for example Nebeker et al. [18] and Goodman [33], also call for standards to support actionable ethics. Furthermore, applied research is still needed to guide developers, users and institutions on the question of how to adopt and evaluate AI ethics in health informatics and biomedical research. Blasimme and Vayena propose to structure this effort according to the adaptivity, flexibility, inclusiveness, reflexivity, responsiveness and monitoring (AFIRRM) principles [34].

3.3.2 AI Ethics Tools and Methods
Morley et al. [35] conducted a review of publicly available AI ethics tools and methods, which aim to help developers, engineers, and designers of machine learning applications to translate AI ethics principles into practice. The result of their work is a typology listing, for each of the five overarching AI ethics principles (beneficence, non-maleficence, autonomy, justice, and explicability), the tools and methods available to apply that ethics principle in each stage of machine learning algorithm development. In total, 107 tools and methods are included in this typology. Morley et al. found that the availability of tools is not evenly distributed across ethical principles, and noticed that most of the available AI ethics tools and methods lack usability and are not actionable in practice, as they offer only limited documentation and little help on how to use them, and users would need a high skill level to apply these tools in practice [35].

Checklists, Frameworks, and Processes
In 2020, the High-Level Expert Group on Artificial Intelligence, set up by the European Commission, provided "The Assessment List for Trustworthy Artificial Intelligence (ALTAI)" [36] to support practical implementation of the seven key ethical requirements listed in the "Ethics Guidelines for Trustworthy AI" [1] and referred to in the "Artificial Intelligence Act" proposed by the European Commission in 2021 [9]. "The Assessment List for Trustworthy Artificial Intelligence" is available as a PDF document as well as an interactive online version ([Link]) containing additional explanatory notes. It is intended for self-evaluation purposes and shall support stakeholders to assess whether an AI system that is being developed, deployed, procured, or used adheres to the seven key ethical requirements [36].

Zicari et al. describe Z-Inspection®, a process based on applied ethics to assess trustworthy AI in practice. They provide a detailed introduction to the phases of the Z-Inspection® process and accompanying material such as catalogues of questions and checklists [37].

Nebeker et al. developed a digital-health decision-making framework and an associated checklist to help researchers and other concerned stakeholders with selecting and evaluating digital technologies for use in health research and healthcare [18]. This digital-health decision-making framework comprises five domains: (1) participant privacy, (2) risks and benefits, (3) access and
cover the research questions on AI ethics in and across the stages of machine learning usability, (4) data management, and (5) eth-
biomedical research raised in the reviewed algorithm development. According to ical principles [18]. Malik et al., provide ten
literature: data ethics (addressing e.g., data the review of Morley et al., the greatest rules for engaging with artificial intelligence
ownership, data sharing and usage, data pri- range of tools and methods is available for in biomedicine, where they mention liabili-
vacy, data bias & skewness, and sensitivity the principle of explicability at the stage ties of computational error, bias harmful to
& specificity), algorithm and model ethics of testing, and these tools are primarily underrepresented groups, as well as privacy
(addressing e.g., machine decision making, ‘statistical’ in nature such as for example and consent challenges especially in genom-
algorithm selection processes, training LIME (Local Interpretable Model-Agnostic ics research as the main ethical implications
and testing of AI models, transparency, Explanations). Furthermore, Morley et al., of AI in biomedicine [38].

IMIA Yearbook of Medical Informatics 2022


Kargl et al.

IBM's research group on Trusted AI provides a range of tools to support ethical principles in AI development [39]. These tools include, for example, "AI Fairness 360", an open-source software toolkit to detect and remove bias in machine learning models [40].

Tools for Reporting AI Transparently

IBM's research group on Trusted AI [39] also provides "AI FactSheets 360", a methodology to create complete and transparent documentation of an AI model or application [41]. Mitchell et al. describe a framework called "Model Cards", which supports transparent reporting and documentation of trained machine learning models, including their performance evaluation and intended use context [42].

Complete and transparent reporting of clinical and biomedical study results is an essential building block of ethical research, since reporting is the precondition for a reliable assessment of the validity of the study results. However, existing reporting guidelines for clinical and biomedical studies are not sufficient to address potential sources of bias specific to AI systems, such as the procedure for acquiring input data, data preprocessing steps, and model development choices [43, 44]. To ensure complete and transparent reporting of clinical trials of AI systems, AI-related extensions of guidelines for reporting clinical trial protocols and completed clinical trials are being developed by key stakeholders in the Enhancing Quality and Transparency of Health Research (EQUATOR) network program. Shelmerdine et al. [45] give a comprehensive overview of reporting guidelines for common study types in (bio)medical research involving AI. Luo et al. [46] created a set of guidelines, including a comprehensive list of reporting items and a practical description of the sequential steps for developing predictive models, to help the biomedical research community with correct application of machine learning models and consistent reporting of model specifications and results. To support the potential of AI in biomedical research and help to overcome the reporting deficit in biomedical AI, Matschinske et al. propose the AIMe standard, a generic minimal information standard for reporting of any biomedical AI system, and the AIMe registry, a community-driven web-based reporting platform for AI in biomedicine. This reporting platform should increase accessibility, reproducibility, and usability of biomedical AI models, and facilitate future revisions by the scientific community [47].

3.3.3 The Need for an Ethically Mindful and Balanced Approach

Although there are many AI ethics guidelines, these guidelines have little actual impact on human decision-making in the field of AI and machine learning [10]. Furthermore, since the AI ethics principles have no legally binding grounding, there is nothing to prevent any company or country from choosing to adopt a different set of ethics principles for the sake of convenience or competitiveness [35]. Jotterand and Bosco [48] argue that the ethical framework that sustains a responsible implementation of such technologies should be reconsidered and assessed in relation to anthropological implications, how the technology might disrupt or enhance the clinical encounter, and how this impacts clinical judgments and the care of patients. The humanistic dimension, with empathy, respect and emotional intelligence, is at the center of their chain of arguments [48]. Buruk et al. give a critical perspective on guidelines for responsible and trustworthy artificial intelligence. They analyzed three main AI ethics guidelines, finding an overlap in several principles, such as human control, autonomy, transparency, security, utility, and equality, but also a great divergence in the description of future scenarios. What is missing, in their view, are grounded suggestions for ethical dilemmas occurring in practical life and a strategy for reflective equilibrium between ethical principles [49]. Faes et al. [50] raise the need for standardization to critically appraise machine learning studies, and promote standards such as TRIPOD-ML, SPIRIT-AI, and CONSORT-AI for reporting, which also cover ethical issues [51-53]. Yochai Benkler calls on society not to let industry write the rules for AI and campaign to bend research and regulation to its benefit. He argues that organizations working to ensure that AI is fair and beneficial must be publicly funded, subject to peer review and transparent to civil society [54]. In their critical assessment of the movement for ethical AI and machine learning, Greene et al. find a missing shared consensus on the moral responsibility of computer engineers and data scientists towards their own inventions. They call for generating this moral consensus (following Popper) and warn the community that ethical design possesses some of the same elements as Value Sensitive Design but lacks its explicit focus on normative ends devoted to social justice or equitable human flourishing [55]. Brent Mittelstadt argues that principles alone cannot guarantee ethical AI. He states that we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement, as AI development lacks common aims and fiduciary duties, a professional history and norms, proven methods to translate principles into practice, and robust legal and professional accountability mechanisms [56].

Whittlestone [57] highlights some of the limitations of ethics principles. In particular, she criticizes that they are often too broad and high-level to guide ethics in practice. She suggests that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as stakeholders try to implement principles. With the term 'tension' she refers to any conflict, whether apparent, contingent or fundamental, between important values or goals, where it appears necessary to give up one in order to realize the other. To improve the current situation, Whittlestone proposes that principles be formalized in standards, codes and ultimately regulation, and that research topics in AI ethics focus more on understanding and resolving tensions as an important step towards solving practical problems arising from the use of AI in society [57].
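The bias-detection tools discussed in this section, such as AI Fairness 360, build on simple group-wise statistics. As a minimal, library-free sketch, one common metric is disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group; the function names and toy data below are illustrative, not the toolkit's API:

```python
def selection_rate(y_pred, group, value):
    """Share of favorable (positive) predictions within one group."""
    preds = [p for p, g in zip(y_pred, group) if g == value]
    return sum(preds) / len(preds)

def disparate_impact(y_pred, group, privileged, unprivileged):
    """Ratio of favorable-outcome rates between groups; a value below
    roughly 0.8 is a common red flag for adverse impact."""
    return (selection_rate(y_pred, group, unprivileged)
            / selection_rate(y_pred, group, privileged))

# Toy binary predictions for two demographic groups "a" and "b".
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(y_pred, group, privileged="a", unprivileged="b")
# group "a" is selected at rate 0.6, group "b" at 0.4, so the ratio is 2/3
```

A ratio near 1 indicates parity; the 0.8 threshold is a convention borrowed from employment law, and full toolkits offer many further metrics (e.g., error-rate differences between groups) beyond this outcome-based check.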


4 Conclusions

The goal of this review was to give a practically useful overview of research strands as well as regulations, guidelines, and tools regarding AI ethics in biomedical research. To reach this goal, more than 2,500 publications were retrieved through queries of scientific databases and grey literature search, and 57 of these were analyzed in depth; Table 1 summarizes the key findings. The review revealed that there is a large number of publications regarding high-level AI ethics principles, but only a few publications dedicated to helping practitioners with the implementation of these high-level principles in practice. Furthermore, the review found that there is a large body of literature regarding AI ethics in healthcare, but comparatively fewer publications dealing with AI ethics in (bio)medical research. Many of the analyzed publications that are specifically dedicated to AI ethics in (bio)medical research tackle the issue of correct, comprehensive, and transparent reporting of (bio)medical studies involving AI. From the literature, ethics in biobanking is – in contrast to AI ethics – a long-established research field covering informed consent, collection of samples, bias in populations, as well as all aspects of secondary sample and data (re)use. Ethical aspects of AI implementation in biobanking are often similar to those of AI in biomedical research, especially with regard to handling big data or tackling informed consent.

Overall, the review results show the need for an ethically mindful and balanced approach to AI in biomedical research, and specifically the need for AI ethics research focused on understanding and resolving practical problems arising from the use of AI in science and society.

Table 1 Summary of the key findings from the literature review (each finding followed by its further reading).

- AI ethics has arrived on the political agenda and has recently become recognized as an important topic also at the political level. Further reading: [1-3,9,14]
- High-level AI ethics principles are stated in many documents developed by government, science, industry, and non-profit organizations in recent years. Further reading: [4,5,10-13]
- Reasons for the principles-to-practice gap are the complexity, variability, and subjectivity of the AI ethics principles on the one hand, as well as lack of ethical knowledge, socio-technical divides and functional separations within organizations on the other hand. Further reading: [5,15,16]
- Efforts to make AI ethics more tangible for practical implementation include practically useful formulations and explanations of AI principles (also specifically for the biomedical domain), development of concrete requirements for AI systems, and standardization initiatives. Further reading: [1,12,17-21,23]
- Ethics traditions in the field of research-oriented biobanks already cover a series of ethical issues during the collection of samples and data and have a clear policy for data reuse in research projects, which should cover both non-AI and AI-based scenarios. Further reading: [24-26,28,31]
- Available tools and methods to help practitioners with the design, development, and implementation of ethical AI include methods addressing explicability of AI at the stage of model testing, tools to detect bias in AI algorithms, as well as checklists and processes for self-assessment of compliance with AI ethics requirements. Further reading: [18,35-40]
- Available tools and methods to support ethical AI in (biomedical) research focus on transparent and complete reporting of AI-related studies and projects. Further reading: [39,41-47,51-53]
- Current research regarding AI ethics in biomedical research covers a wide range of topics, including ethics related to data, algorithms and models, predictive analytics, norms, and relationships. Further reading: [8,32]
- Future research shall include applied research and standards addressing ethical dilemmas occurring in practice as well as anthropological implications of AI usage (in the biomedical domain). Further reading: [18,33,48-50,57]
- Calls on society include a shared consensus on the moral responsibility of computer engineers and data scientists towards their AI inventions, public funding and transparency of organizations working to ensure that AI is fair and beneficial, as well as robust legal and professional accountability mechanisms, all of which are needed to achieve ethical AI. Further reading: [54-57]

Acknowledgements

Parts of this work have received funding from the Austrian Science Fund (FWF), Project P-32554 (Explainable Artificial Intelligence), and from the Austrian Research Promotion Agency (FFG) under grant agreement No. 879881 (EMPAIA). Parts of this work have received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No. 857122 (CYBiobank), No. 824087 (EOSC-Life), No. 874662 (HEAP), and No. 826078 (Feature Cloud). This publication reflects only the authors' view, and the European Commission is not responsible for any use that may be made of the information it contains.

References

1. AI HLEG, European Commission. Ethics guidelines for trustworthy AI; 2019.
2. UNESCO. Recommendation on the ethics of artificial intelligence; 2021. Available from: [Link]/ark:/48223/pf0000377897
3. WHO. Ethics and governance of artificial intelligence for health; 2021. Available from: [Link]
4. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nature Machine Intelligence 2019;1:389-99.
5. Khan AA, Badshah S, Liang P, Khan B, Waseem M, Niazi M, et al. Ethics of AI: A systematic literature review of principles and challenges. In: The International Conference on Evaluation and Assessment in Software Engineering; 2022. p. 383-92.
6. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: A mapping review. Soc Sci Med 2020 Sep;260:113172.
7. Ienca M, Ferretti A, Hurst S, Puhan M, Lovis C, Vayena E. Considerations for ethics review of big data health research: A scoping review. PLoS One 2018 Oct 11;13(10):e0204937.
8. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics 2021 Feb 15;22(1):14.
9. European Commission. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts; 2021.
10. Hagendorff T. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 2020;30:99-120.
11. Floridi L, Cowls J. A unified framework of five principles for AI in society. Harvard Data Science Review 2019;1.


12. Loi M, Heitz C, Christen M. A comparative assessment and synthesis of twenty ethics codes on AI and big data. Institute of Electrical and Electronics Engineers Inc.; 2020. p. 41-6.
13. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, et al. AI4People - An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 2018;28:689-707.
14. OECD. Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449; 2019. Available from: [Link]ments/OECD-LEGAL-0449
15. Zhou J, Chen F, Berry A, Reed M, Zhang S, Savage S. A survey on ethical principles of AI and implementations. Institute of Electrical and Electronics Engineers Inc.; 2020. p. 3010-7.
16. Schiff D, Rakova B, Ayesh A, Fanti A, Lennon M. Principles to practices for responsible AI: Closing the gap. arXiv preprint arXiv:2006.04707; 2020.
17. Ryan M, Stahl BC. Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. Journal of Information, Communication and Ethics in Society 2021;19:61-86.
18. Nebeker C, Torous J, Bartlett Ellis RJ. Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med 2019 Jul 17;17(1):137.
19. IEEE. IEEE 7000™ projects | IEEE ethics in action; 2021. Available from: [Link]org/p7000/
20. Hjalmarson M. To ethicize or not to ethicize. ISOfocus_137; 2019. Available from: [Link] [Link]/isofocus_137.html
21. Müller H, Mayrhofer MT, Veen EBV, Holzinger A. The ten commandments of ethical medical AI. Computer 2021;54:119-23.
22. Jongsma KR, Bredenoord AL. Ethics parallel research: an approach for (early) ethical guidance of biomedical innovation. BMC Med Ethics 2020 Sep 1;21(1):81.
23. Liaw ST, Liyanage H, Kuziemsky C, Terry AL, Schreiber R, Jonnagaddala J, et al. Ethical Use of Electronic Health Record Data and Artificial Intelligence: Recommendations of the Primary Care Informatics Working Group of the International Medical Informatics Association. Yearb Med Inform 2020 Aug;29(1):51-7.
24. Solbakk JH, Holm S, Hofmann B. The Ethics of Research Biobanking. Springer US; 2009.
25. Kozlakidis Z. Biobanks and Biobank-Based Artificial Intelligence (AI) Implementation Through an International Lens. In: Artificial Intelligence and Machine Learning for Digital Pathology. Cham: Springer; 2020. p. 195-203.
26. Lee JE. Artificial intelligence in the future biobanking: Current issues in the biobank and future possibilities of artificial intelligence. Biomedical Journal of Scientific & Technical Research 2018;7(3):5937-9.
27. Kinkorová J, Topolčan O. Biobanks in the era of big data: objectives, challenges, perspectives, and innovations for predictive, preventive, and personalised medicine. EPMA J 2020;11(3):333-41.
28. Jurate S, Zivile V, Eugenijus G. 'Mirroring' the ethics of biobanking: What analysis of consent documents can tell us? Sci Eng Ethics 2014;20:1079-93.
29. Chauhan C, Gullapalli RR. Ethics of AI in Pathology: Current Paradigms and Emerging Issues. Am J Pathol 2021 Oct;191(10):1673-83.
30. Baxi V, Edwards R, Montalto M, Saha S. Digital pathology and artificial intelligence in translational medicine and clinical practice. Mod Pathol 2022 Jan;35(1):23-32.
31. Gille F, Vayena E, Blasimme A. Future-proofing biobanks' governance. Eur J Hum Genet 2020 Aug;28(8):989-96.
32. Saheb T, Saheb T, Carpenter DO. Mapping research strands of ethics of artificial intelligence in healthcare: A bibliometric and content analysis. Comput Biol Med 2021 Aug;135:104660.
33. Goodman KW. Ethics in Health Informatics. Yearb Med Inform 2020 Aug;29(1):26-31.
34. Blasimme A, Vayena E. The ethics of AI in biomedical research, patient care and public health (April 9, 2019). In: Oxford Handbook of Ethics of Artificial Intelligence. Forthcoming.
35. Morley J, Floridi L, Kinsey L, Elhalal A. From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics 2020;26(4):2141-68.
36. European Commission and Directorate-General for Communications Networks, Content and Technology. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment. EC Publications Office; 2020.
37. Zicari RV, Brodersen J, Brusseau J, Dudder B, Eichhorn T, Ivanov T, et al. Z-Inspection®: A process to assess trustworthy AI. IEEE Trans Technol Soc 2021;2:83-97.
38. Malik A, Patel P, Ehsan L, Guleria S, Hartka T, Adewole S, et al. Ten simple rules for engaging with artificial intelligence in biomedicine. PLoS Comput Biol 2021 Feb 11;17(2):e1008531.
39. IBM. Trusted AI | IBM research teams; 2021. Available from: [Link]trusted-ai#tools
40. Bellamy RK, Dey K, Hind M, Hoffman SC, Houde S, Kannan K, et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM J Res Dev 2019;63(4/5):4:1-4:15.
41. Richards J, Piorkowski D, Hind M, Houde S, Mojsilović A. A methodology for creating AI FactSheets. arXiv preprint arXiv:2006.13796; 2020.
42. Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, et al. Model cards for model reporting. ACM; 2019.
43. Ibrahim H, Liu X, Rivera SC, Moher D, Chan AW, Sydes MR, et al. Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines. Trials 2021 Jan 6;22(1):11.
44. Sounderajah V, Ashrafian H, Golub RM, Shetty S, De Fauw J, Hooft L, et al; STARD-AI Steering Committee. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open 2021 Jun 28;11(6):e047709.
45. Shelmerdine SC, Arthurs OJ, Denniston A, Sebire NJ. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform 2021 Aug;28(1):e100385.
46. Luo W, Phung D, Tran T, Gupta S, Rana S, Karmakar C, et al. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View. J Med Internet Res 2016 Dec 16;18(12):e323.
47. Matschinske J, Alcaraz N, Benis A, Golebiewski M, Grimm DG, Heumos L, et al. The AIMe registry for artificial intelligence in biomedical research. Nat Methods 2021 Oct;18(10):1128-31.
48. Jotterand F, Bosco C. Artificial Intelligence in Medicine: A Sword of Damocles? J Med Syst 2021 Dec 11;46(1):9.
49. Buruk B, Ekmekci PE, Arda B. A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Med Health Care Philos 2020 Sep;23(3):387-99.
50. Faes L, Liu X, Wagner SK, Fu DJ, Balaskas K, Sim DA, et al. A Clinician's Guide to Artificial Intelligence: How to Critically Appraise Machine Learning Studies. Transl Vis Sci Technol 2020 Feb 12;9(2):7.
51. Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet 2019 Apr 20;393(10181):1577-9.
52. CONSORT-AI and SPIRIT-AI Steering Group. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nat Med 2019 Oct;25(10):1467-8.
53. Liu X, Faes L, Calvert MJ, Denniston AK; CONSORT/SPIRIT-AI Extension Group. Extension of the CONSORT and SPIRIT statements. Lancet 2019 Oct 5;394(10205):1225.
54. Benkler Y. Don't let industry write the rules for AI. Nature 2019 May;569(7755):161.
55. Greene D, Hoffmann AL, Stark L. Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the Annual Hawaii International Conference on System Sciences; 2019. p. 2122-31.
56. Mittelstadt B. AI ethics - Too principled to fail. arXiv preprint arXiv:1906.06668; 2019.
57. Whittlestone J, Nyrup R, Alexandrova A, Cave S. The role and limits of principles in AI ethics: Towards a focus on tensions. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. ACM; 2019. p. 195-200. Available from: [Link]

Correspondence to:
Michaela Kargl
Medical University Graz
Auenbruggerplatz 2
Graz, 8036
Austria
E-mail: [Link]@[Link]

