Nicole Martinez-Martin (Center for Biomedical Ethics, Stanford University, Stanford, CA, USA)
Zelun Luo (Department of Computer Science, Stanford University, Stanford, CA, USA)
Amit Kaushal (Department of Bioengineering, Stanford University, Stanford, CA, USA)
Ehsan Adeli (Department of Psychiatry and Behavioral Sciences and Department of Computer Science, Stanford University, Stanford, CA, USA)
Albert Haque (Department of Computer Science, Stanford University, Stanford, CA, USA)
Sara S Kelly (Clinical Excellence Research Center, Department of Medicine, Stanford University, Stanford, CA, USA)
Sarah Wieten (Center for Biomedical Ethics, Stanford University, Stanford, CA, USA)
Mildred K Cho (Center for Biomedical Ethics, Stanford University, Stanford, CA, USA)
David Magnus (Center for Biomedical Ethics, Stanford University, Stanford, CA, USA)
Li Fei-Fei (Stanford Institute for Human-Centered Artificial Intelligence, Stanford University, Stanford, CA, USA)
Kevin Schulman (Clinical Excellence Research Center, Department of Medicine, Stanford University, Stanford, CA, USA)
Arnold Milstein (Clinical Excellence Research Center, Department of Medicine, Stanford University, Stanford, CA, USA)

Correspondence to: Dr Nicole Martinez-Martin, Center for Biomedical Ethics, School of Medicine, Stanford University, Stanford, CA 94305, USA

Open Access. Published: December 21, 2020. DOI: //doi.org/10.1016/S2589-7500(20)30275-2
Ethical issues in using ambient intelligence in health-care settings
Summary
Ambient intelligence is increasingly finding applications in health-care settings, such as helping to ensure clinician and patient safety by monitoring staff compliance with clinical best practices or relieving staff of burdensome documentation tasks. Ambient intelligence involves using contactless sensors and contact-based wearable devices embedded in health-care settings to collect data (eg, imaging data of physical spaces, audio data, or body temperature), coupled with machine learning algorithms to efficiently and effectively interpret these data. Despite the promise of ambient intelligence to improve quality of care, the continuous collection of large amounts of sensor data in health-care settings presents ethical challenges, particularly in terms of privacy, data management, bias and fairness, and informed consent. Navigating these ethical issues is crucial not only for the success of individual uses, but for acceptance of the field as a whole.
Introduction
Concurrent advances in multi-modal sensing technology, machine learning, and computer vision have enabled the development of ambient intelligence—the ability to continuously and unobtrusively monitor and understand actions in physical environments. Ambient intelligence is increasingly finding use in health-care settings.
Haque A, Milstein A, Fei-Fei L. Illuminating the dark spaces of healthcare with ambient intelligence. Nature 2020; 585: 193–202.
Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25: 44–56.
Figure: Sensor data collection for ambient intelligence in health-care settings. RGB=red, green, blue analogue colour video signal.
Ambient sensors are placed in hospital settings (eg, intensive care units [ICUs] and operating rooms) to monitor the activities of clinicians, staff, and patients, as well as in daily living spaces (eg, independent living or community care settings) to gather data relevant to managing care for older people, chronic disease management, or mental health problems. In the hospital setting, ambient intelligence has been used to help ensure the safety of clinicians and patients by monitoring the skill of a surgeon or adherence to hand hygiene protocols in the ICU.
Haque A, Guo M, Alahi A, et al. Towards vision-based smart hospitals: a system for tracking and monitoring hand hygiene compliance. arXiv 2018 (preprint). //arxiv.org/abs/1708.00163
Yeung S, Downing NL, Fei-Fei L, Milstein A. Bedside computer vision—moving artificial intelligence from driver assistance to patient safety. N Engl J Med 2018; 378: 1271–73.
Chen J, Cremer JF, Zarei K, Segre AM, Polgreen PM. Using computer vision and depth sensing to measure healthcare worker-patient contacts and personal protective equipment adherence within hospital rooms. Open Forum Infect Dis 2015; 3: ofv200.
Yeung S, Rinaldo F, Jopling J, et al. A computer vision system for deep learning-based detection of patient mobilization activities in the ICU. NPJ Digit Med 2019; 2: 1–5.
Colaner S. Stanford researchers propose AI in-home system that can monitor for coronavirus symptoms. VentureBeat, April 6, 2020. //venturebeat.com/2020/04/06/stanford-researchers-propose-ai-in-home-system-that-can-monitor-for-coronavirus-symptoms/ (accessed May 3, 2020).
Roux M. How facial recognition is used in healthcare. March 23, 2019. //sightcorp.com/blog/how-facial-recognition-is-used-in-healthcare/ (accessed July 7, 2020).
Pascu L. US healthcare network integrates TensorMark's AI, facial recognition to make returning to work safer. Biometric Update, May 22, 2020. //www.biometricupdate.com/202005/u-s-healthcare-network-integrates-tensormarks-ai-facial-recognition-to-make-returning-to-work-safer (accessed July 7, 2020).
For all its promise, ambient intelligence in health-care settings comes with a spectrum of ethical concerns that set it apart from other machine learning applications in health care. The continuous collection and storage of large amounts of sensing data involving various participants in different contexts, and the potential combination of many different types of data for analysis, raises issues of privacy, data protection, informed consent, and fairness that might not be easily addressed through existing ethical and regulatory frameworks.
Ahonen P, Alahuhta P, Daskala B. Introduction. In: Wright D, Gutwirth S, Friedewald M, Vildjiounaite E, Punie Y, eds. Safeguards in a world of ambient intelligence. London: Springer, 2008: 1–9.
Nebeker C, Torous J, Bartlett Ellis RJ. Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med 2019; 17: 137.
Martinez-Martin N. What are important ethical implications of using facial recognition technology in health care? AMA J Ethics 2019; 21: E180–87.
Developing ambient intelligence algorithms
To identify potential privacy and ethical issues that arise with ambient intelligence in health-care settings, it is first important to understand how these algorithms are developed. Learning-based ambient intelligence methods use data acquired from various ambient sensors and then apply machine learning and computer vision algorithms to identify specified patterns (including human behaviours in the videos) or to recognise speech in the audio.
Sanchez D, Tentori M, Favela J. Activity recognition for the smart hospital. IEEE Intell Syst 2008; 23: 50–57.
Table: Stages of designing and implementing algorithms for ambient intelligence use in health-care settings

Stage 1: framing the problem
- Activities: decide what the statistical model should achieve
- Key points: articulate the desired outcome, which also shapes what data will be needed
- Ethical issues: setting up a project to achieve relevant goals and avoid problematic bias

Stage 2: data collection
- Activities: inclusion and exclusion of data
- Key points: including relevant data and avoiding an approach that reinforces existing prejudice and biases in the context of the problem; includes the issue of primary use (whether data were generated or collected specifically for the algorithm) and secondary use (whether data from another source are being repurposed)
- Ethical issues: avoiding problematic bias; privacy; consent

Stage 3: training and validating the algorithm
- Activities: annotation—ie, activities and behaviours are labelled
- Key points: quality requirements for the image or sensory data will be determined by the behaviour or action of interest
- Ethical issues: privacy; fairness and bias

Stage 4: testing
- Activities: assess computer performance in applying a label to input data (eg, image or video)
- Key points: could require annotation to be done again by people
- Ethical issues: privacy; liability

Stage 5: deployment
- Activities: validated algorithm deployed in the care setting
- Key points: image or other sensory data are assessed only by the algorithm, with no labelling being done by people
- Ethical issues: privacy; achieving appropriate care decisions; avoiding misinterpretation and bias; liability

Stage 6: long-term use
- Activities: ambient intelligence system used to collect data
- Key points: continuous monitoring by the sensor is required; use of ambient intelligence affects health-care decision making
- Ethical issues: fairness; privacy and surveillance; effect on the clinical relationship; effect on health-care employer–employee relationships; potential for misuse
Just as a student who has seen the test questions before an examination might get an artificially high score, algorithms tend to do unexpectedly well if their performance is evaluated using the same data that were used to train them. To ensure trained models can generalise to unseen data, researchers use a separate labelled validation dataset during the training stage. The validation dataset is like an online practice exam: it is used repeatedly to evaluate and tune the algorithm during the training process. Once the algorithm has achieved a satisfactory score on the validation dataset, it is evaluated against the test dataset (stage 4); this is like the final exam, in which the algorithm is run against never-before-seen data and its final performance characteristics are reported.
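The train-validation-test discipline described above can be sketched in a few lines of Python. The function, split ratios, and clip names below are purely illustrative, not drawn from any particular ambient intelligence pipeline:

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Partition labelled samples into disjoint train/validation/test sets.

    The test set is held out entirely until training and tuning are
    finished, so reported performance reflects never-before-seen data.
    """
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]      # practice exam: reused for tuning
    test = shuffled[n_train + n_val:]            # final exam: used once
    return train, val, test

# Illustrative use with 100 labelled video clips
clips = [f"clip_{i:03d}" for i in range(100)]
train, val, test = split_dataset(clips)
print(len(train), len(val), len(test))  # 70 15 15
```

Fixing the random seed makes the partition reproducible, which matters when performance figures need to be audited later.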
In most commonly used implementations of machine learning (ie, supervised machine learning), successful training, validation, and testing are only possible with large amounts of labelled data. Annotation is the process of labelling activities or behaviours of interest; it is a manual process in which a person has to look back at the data to determine if, when, and where an activity of interest is occurring in the data.
Hanbury A. A survey of methods for image annotation. J Vis Lang Comput 2008; 19: 617–27.
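An annotation produced at this stage amounts to a labelled time span in a sensor stream. A minimal record might look like the following sketch; the field names and example values are invented for illustration, not taken from any particular annotation tool:

```python
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    """One manually labelled activity span in a sensor recording."""
    recording_id: str   # which video or audio stream the span comes from
    start_s: float      # when the activity begins (seconds into the recording)
    end_s: float        # when it ends
    label: str          # activity of interest, eg "hand_hygiene"
    annotator_id: str   # who applied the label (useful for quality audits)

ann = Annotation("icu_cam_07_2020-05-01", 128.4, 131.9,
                 "hand_hygiene", "annot_12")
print(asdict(ann)["label"])  # hand_hygiene
```

Recording who annotated each span supports the quality checks mentioned in the table (stage 4), where annotation could need to be done again by people.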
The next stage in the process is deployment in the target health-care setting (stage 5). In the deployed stage, the sensory data are generally subject to assessment only by the algorithm, which can be used to provide direct interventions for quality improvement or to assist clinicians in making decisions. Although active learning methods can be used to create machine learning algorithms that can receive feedback loops from the experts (ie, clinicians in health-care settings), such algorithms are rare in the application of ambient intelligence to health-care environments.
Settles B. Active learning literature survey. Jan 9, 2009. //research.cs.wisc.edu/techreports/2009/TR1648.pdf (accessed June 25, 2020).
Sun B, Feng J, Saenko K. Return of frustratingly easy domain adaptation. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence; Phoenix, AZ, USA; Feb 12–17, 2016: 2058–65.
Heaven WD. Google's medical AI was super accurate in a lab. Real life was a different story. MIT Technology Review, April 27, 2020. //www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease (accessed May 25, 2020).
A deployed algorithm is necessary but usually not sufficient to derive benefit from ambient intelligence—the output of that algorithm must be connected to some clinical workflow or action. Does the output of the algorithm automatically result in a decision or action, or is there a person in the loop who is shown a notification and must then decide how to act? If the latter, is it better to err on the side of alerting too much or too little? What are the clinical or operational metrics that measure success? These questions form the bridge between a technically high-performing algorithm and an actual benefit to patients or other stakeholders. The deployment stage also raises questions regarding how to test the algorithms in the clinical environment. In the USA, if the sensors are integrated with the algorithms, they might be classified as medical devices, and thus be subject to regulation by the US Food and Drug Administration.
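The choice between automatic action and a person in the loop can be made explicit in deployment logic. The sketch below is a hypothetical illustration; the thresholds, action labels, and function name are assumptions, not part of any system described here:

```python
def route_alert(score, auto_threshold=0.95, notify_threshold=0.6):
    """Map an algorithm's confidence score to a deployment action.

    Lowering notify_threshold produces more alerts (erring toward
    over-alerting); raising it trades missed events for less alarm
    fatigue. Choosing these values is a clinical decision, not a
    purely technical one.
    """
    if score >= auto_threshold:
        return "log_event"          # high confidence: record automatically
    if score >= notify_threshold:
        return "notify_clinician"   # person in the loop decides how to act
    return "ignore"

print(route_alert(0.97))  # log_event
print(route_alert(0.70))  # notify_clinician
print(route_alert(0.30))  # ignore
```

Even this tiny example shows why success metrics must be defined up front: the same algorithm yields very different alert volumes depending on where the thresholds are set.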
As our understanding of how to develop this technology improves, the list of actions or behaviours of interest to the research community might also grow. Previously annotated images can be reannotated to discern an additional set of activities for labelling (or researchers might be interested in a finer gradation of previously labelled activities). Furthermore, building increasingly large databases of labelled images could improve the performance of algorithms over time.
The development of ambient intelligence also requires engagement with various ethical issues at each stage of the research process. Broad ethical frameworks for artificial intelligence and machine learning usage already exist. It will be important to go beyond lists of broader principles to develop tools and processes for ambient intelligence usage that incorporate active and ongoing reflection and engagement with ethical issues in the design and development of these applications—for example, identifying the stages of development at which to engage stakeholders' perspectives or incorporate ethical consultation. Ethical issues in the stages of ambient intelligence development and use in health-care settings are summarised in the table, and described in further detail in the following sections.
Ethical issues
Privacy
Researchers developing ambient intelligence applications need to carefully consider various aspects of the project, including the settings in which sensing data will be collected, the types of information that could be captured by the sensors, the inferences that might be drawn from that information, and what design measures might be needed to protect that information, especially given that efforts to deidentify information cannot be as complete as is sometimes imagined. In the USA, privacy interests are protected under constitutional law, a variety of federal and state statutes and regulations, and by cultural norms and professional ethics.
Allen AL. Privacy. In: LaFollette H, ed. The Oxford handbook of practical ethics. Oxford: Oxford University Press, 2005: 485–513.
Privacy is a concept that incorporates a range of rights and obligations meant to protect an individual from unwanted intrusions or interferences into their personal domain.
Rothstein MA. Health privacy in the electronic age. J Leg Med 2007; 28: 487–501.
Alpert SA. Protecting medical privacy: challenges in the age of genetic information. J Soc Issues 2003; 59: 301–22.
Alpert S. Health care information: confidentiality, access, and good practice. In: Goodman KW, ed. Ethics, computing, and medicine: informatics and the transformation of healthcare. New York: Cambridge University Press, 1998: 75–101.
Allen A. Understanding privacy: the basics. In: Gilbert F, Kennedy JB, Schwartz PM, Smedinghoff TJ, eds. Seventh annual institute on privacy law: evolving laws and practices in a security-driven world, vol 1. New York: Practising Law Institute, 2006: 23–33.
Stanley KG, Osgood ND. The potential of sensor-based monitoring as a tool for health care, health promotion, and research. Ann Fam Med 2011; 9: 296–98.
Wachter S, Mittelstadt B. A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Oxford Business Law Blog, Oct 9, 2018. //papers.ssrn.com/abstract=3248829 (accessed May 7, 2019).
Informational privacy is not the only type of privacy concern at issue. Ambient sensors could be placed in patients' homes or in health-care settings that patients, hospital staff, caregivers, family members, and others might ordinarily expect to be free of monitoring devices. Some people might want to restrict when a third party is able to view particular parts of their bodies or monitor them in a vulnerable state, such as when they are going to the bathroom. The right of an individual to make decisions about their own care and activity, without undue interference from government or unauthorised people, is a different aspect of privacy, sometimes referred to as decisional privacy.
Allen A. Coercing privacy. March 1, 1999. //scholarship.law.upenn.edu/faculty_scholarship/803 (accessed October 2, 2020).
Privacy is a value that presents trade-offs with other values and considerations in a project. The type of project (eg, research versus quality improvement) is relevant to the ethical framework used to assess such trade-offs. For example, using thermal imaging instead of full video capture can obscure the identity of participants, but this must be weighed against other goals, such as whether thermal imaging can adequately capture the features relevant to the scientific goals of a project. Privacy provisions in medical research generally balance individual privacy protection with the need to promote data sharing for scientific purposes. It is important that ambient intelligence researchers collecting data from ambient sensors are able to clearly articulate the benefits to be derived from the research, to facilitate the assessment of how those benefits balance against risks to privacy and to formulate measures to preserve participants' privacy accordingly.
It should not simply be assumed that patients or other participants value informational privacy over other types of privacy or the potential scientific benefits from allowing the collection of some of their personal information. There are indications that people might be willing to share personal information if they feel it is for the benefit of science.
Mello MM, Lieou V, Goodman SN. Clinical trial participants' views of the risks and benefits of data sharing. N Engl J Med 2018; 378: 2202–11.
Choices regarding the context of the project, and the types of stakeholder involved, affect which laws and regulations will be relevant for preserving privacy. The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule generally applies to projects sponsored by health-care organisations in the USA. For projects classified as human subjects research, privacy and data protection measures are required as part of the ethical conduct of research. If the activities of medical or other health-care students are captured by the sensors, privacy protections under the Family Educational Rights and Privacy Act of 1974 might also be applicable to their participation. The applicability of state and local data privacy or biometric statutes should also be ascertained for a project. For example, the California Consumer Privacy Act provides for consumer rights in relation to the access, sharing, and deletion of personal information collected by businesses, which can apply to some health-care settings and data.
Marotta KA. Complying with the California Consumer Privacy Act: are health care organizations "home free"? The National Law Review, April 4, 2019. //www.natlawreview.com/article/complying-california-consumer-privacy-act-are-health-care-organizations-home-free (accessed May 15, 2020).
Davis J. Europe's GDPR privacy law is coming: here's what US health orgs need to know. Healthcare IT News, March 21, 2018. //www.healthcareitnews.com/news/europes-gdpr-privacy-law-coming-heres-what-us-health-orgs-need-know (accessed September 29, 2020).
Miliard M. European perspective: how hospitals should be approaching GDPR compliance. Healthcare IT News, Dec 11, 2018. //www.healthcareitnews.com/news/european-perspective-how-hospitals-should-be-approaching-gdpr-compliance (accessed September 29, 2020).
The HIPAA Privacy Rule requires informed consent, or a waiver of authorisation or documentation of informed consent, to use protected health information for specific research studies.
45 Code of Federal Regulations § 164.501, § 164.508, § 164.512(i). //www.law.cornell.edu/cfr/text/45/164.501 (accessed December 10, 2020).
45 Code of Federal Regulations § 46.102(f)(2). //www.law.cornell.edu/cfr/text/45/46.102 (accessed December 10, 2020).
US Department of Labor, Employee Benefits Security Administration. Health Insurance Portability and Accountability Act (HIPAA). 2020. //www.dol.gov/agencies/ebsa/laws-and-regulations/laws/hipaa (accessed September 22, 2020).
US Department of Health & Human Services. Guidance regarding methods for de-identification of protected health information in accordance with the Health Insurance Portability and Accountability Act (HIPAA) privacy rule. Sept 7, 2012. //www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html (accessed September 22, 2020).
It is important to note that the risk of reidentification of data cannot be completely eliminated.
Rocher L, Hendrickx JM, de Montjoye Y-A. Estimating the success of re-identifications in incomplete datasets using generative models. Nat Commun 2019; 10: 3069.
Culnane C, Rubinstein BIP, Teague V. Health data in an open world. arXiv 2017 (preprint). //arxiv.org/abs/1712.05627
Na L, Yang C, Lo C-C, Zhao F, Fukuoka Y, Aswani A. Feasibility of reidentifying individuals in large national physical activity data sets from which protected health information has been removed with use of machine learning. JAMA Netw Open 2018; 1: e186040.
Yoo JS, Thaler A, Sweeney L, Zang J. Risks to patient privacy: a re-identification of patients in Maine and Vermont statewide hospital data. Technol Sci 2018; published online Oct 8. //techscience.org/a/2018100901/
Simon GE, Shortreed SM, Coley RY, et al. Assessing and minimizing re-identification risk in research data derived from health care records. EGEMS (Wash DC) 2019; 7: 6.
Data management and liability
A key tenet of privacy in research on humans is stewardship of the data. Effective stewardship includes ensuring that only members of the research team have access to the study data, that members of the research team are trained in data privacy and security and have signed privacy agreements with the sponsoring institutions, and that data practices minimise access to fully identifiable data as much as possible (eg, by substituting study identification numbers for identifiable names). Because projects continuously collect data, research in ambient intelligence could also contribute to creating standards for data sharing and data merging, such as methods to collect data not only on ongoing processes, but also on the contexts of those processes.
Streitz N, Charitos D, Kaptein M, Böhlen M. Grand challenges for ambient intelligence and implications for design contexts and smart societies. J Ambient Intell Smart Environ 2019; 11: 87–107.
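Substituting study identification numbers for identifiable names, as described above, can be implemented as a key table that is stored separately from the study data and accessible only to authorised team members. A minimal sketch, in which the class name and ID format are illustrative assumptions:

```python
import itertools

class Pseudonymiser:
    """Replace identifiable names with stable study IDs.

    The internal name-to-ID table is the re-identification key: it must
    be kept apart from the study dataset, with access restricted to the
    research team, so that the shared data carry only study IDs.
    """
    def __init__(self):
        self._ids = {}
        self._counter = itertools.count(1)

    def study_id(self, name):
        # Assign a new ID on first sight; return the same ID thereafter
        if name not in self._ids:
            self._ids[name] = f"SUBJ-{next(self._counter):04d}"
        return self._ids[name]

p = Pseudonymiser()
print(p.study_id("Jane Doe"))  # SUBJ-0001
print(p.study_id("John Roe"))  # SUBJ-0002
print(p.study_id("Jane Doe"))  # SUBJ-0001 (stable across records)
```

Stability matters: the same participant must map to the same ID across recordings, or longitudinal analysis breaks, which is why the table, and not a one-way hash alone, is typically retained under restricted access.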
The HIPAA Privacy Rule requires covered entities to consider issues such as the technical, hardware, and software infrastructure of their data security measures to protect health information. Privacy considerations include the careful assessment of security measures, including data storage and transfer. Computer vision and other sensor data constitute a large amount of information to be stored for research purposes, and technical choices that drive the research (eg, compression and frame rate of capture) can increase or reduce these storage requirements. Data encryption is a crucial element of protecting patient privacy. New technology (eg, edge computing) can allow encryption before data are transmitted from the computer vision camera to the data storage destination (eg, a local server or protected cloud environment). Given the storage requirements, careful consideration is required about how long the raw data will be maintained. At the research stage, this length of time will be driven by the research requirements. In the production stage, institutional data retention policies based on local law might need to be developed; given the scale of raw data being collected, retention might be challenging if patients have multiple video sensors operating continuously.
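The scale of the storage question is easy to see with a back-of-envelope calculation. The bit rate below is an assumed figure for compressed video, not a measurement from any deployed system:

```python
def storage_gb_per_day(n_cameras, mbit_per_s=4.0):
    """Estimate storage for continuous compressed video capture.

    mbit_per_s is an assumed per-camera bit rate; real figures depend
    on resolution, frame rate, and the compression codec chosen.
    """
    seconds_per_day = 24 * 60 * 60
    bytes_per_day = n_cameras * (mbit_per_s * 1e6 / 8) * seconds_per_day
    return bytes_per_day / 1e9

# One camera at ~4 Mbit/s stores roughly 43 GB/day; 40 cameras (eg, a
# 20-bed unit with two cameras per room) approach 1.7 TB/day.
print(round(storage_gb_per_day(1), 1))   # 43.2
print(round(storage_gb_per_day(40), 1))  # 1728.0
```

Numbers at this scale are why compression and frame-rate choices, retention periods, and deletion schedules all become privacy-relevant design decisions rather than afterthoughts.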
During the data annotation stage, data are sometimes sent to an outside business for annotation. HIPAA includes provisions for the sharing of protected health information with business associates. Still, it remains essential for a project to carefully consider the data security practices of the company providing data annotation services.
The raw imaging data collected by sensors could be relevant to potential legal actions to establish liability.
Gerke S, Yeung S, Cohen IG. Ethical and legal aspects of ambient intelligence in hospitals. JAMA 2020; 323: 601–02.
National Institutes of Health. What is a Certificate of Confidentiality? Jan 15, 2019. //grants.nih.gov/policy/humansubjects/coc/what-is.htm (accessed April 29, 2020).
Consent
Participants in research studies that collect data through ambient intelligence have the same rights and concerns as patients in other types of research on humans. In considering whether to participate in the project, people would need to be aware of the potential use of their data, including how their data might be used for the specific research project underway, future research efforts, and potential collaborations with other investigators (at other institutions or within industry). A description of the ambient intelligence research project needs to address potential expectations regarding the data, such as letting the patient and their family members know that the sensor data cannot be expected to provide warnings of real-time patient problems, because a substantial amount of time might pass between the capture of sensor data and its review (not all sensor data might be needed for annotation, so there should be no expectation that specific data will be reviewed). Patients should be aware that their care will not be affected by their participation in the study (unless that is the purpose of the study) or by their withdrawal from the study. Also, patients should understand that their care team is not their research team, and that data annotation will not be done by the care team.
A waiver of informed consent is permitted if the institutional review board determines that the research involves minimal risk to participants, the research cannot be practically carried out without a waiver, the waiver will not affect the rights or welfare of the participants, and (if appropriate) the participant will receive additional information regarding their participation.
45 Code of Federal Regulations § 46.116(c). //www.law.cornell.edu/cfr/text/45/46.116 (accessed December 10, 2020).
Documentation that a project meets the requirements for a waiver of consent could be useful in settings such as an ICU or emergency department. Institutions might want to ensure patients are made aware of ambient intelligence via notices of privacy practices in their patient consent forms. However, a hospital consent form that notifies patients about the use of their medical data might not be sufficient to constitute consent for research purposes for this type of project, so an additional consent process would be needed. Even when there are no applicable legal requirements for informed consent, it is important to provide transparency regarding the use of ambient intelligence systems in particular settings, to maintain public trust and provide people with the opportunity to make decisions regarding their personal information. If facial recognition technology is used, the Association for Computing Machinery recommends providing ongoing public notice at the point of use in a format appropriate to the context.
Association for Computing Machinery. Statement on principles and prerequisites for the development, evaluation and use of unbiased facial recognition technologies. June 30, 2020. //www.acm.org/binaries/content/assets/public-policy/ustpc-facial-recognition-tech-statement.pdf (accessed July 10, 2020).
Fairness and bias
The potential for bias in artificial intelligence systems is a recognised challenge for their implementation in health care.
- Char DS
- Shah NH
- Magnus D
Implementing machine learning in health care—addressing ethical challenges.
N Engl J Med. 2018; 378: 981-983
- Crossref
- PubMed
- Scopus (426)
- Google Scholar
- Gianfrancesco MA
- Tamang S
- Yazdany J
- Schmajuk G
Potential biases in machine learning algorithms using electronic health record data.
JAMA Intern Med. 2018; 178: 1544-1547
- Crossref
- PubMed
- Scopus (362)
- Google Scholar
- Challen R
- Denny J
- Pitt M
- Gompels L
- Edwards T
- Tsaneva-Atanasova K
Artificial intelligence, bias and clinical safety.
BMJ Qual Saf. 2019; 28: 231-237
- Crossref
- PubMed
- Scopus (253)
- Google Scholar
- Gavish Y
#2: What you need to know about ML Algorithms and why you should care.
Medium. July 25, 2017;
//medium.com/@yaelg/product-manager-guide-part-2-what-you-need-know-machine-learning-algorithms-models-data-performance-cff5a837cec2
Date accessed: April 15, 2020
- Google Scholar
- Ngiam KY
- Khor IW
Big data and machine learning algorithms for health-care delivery.
Lancet Oncol. 2019; 20: 262-273
- Summary
- Full Text
- Full Text PDF
- PubMed
- Scopus (364)
- Google Scholar
- Marcus G
Deep learning: a critical appraisal.
arXiv. 2018; (published online Jan 2.) (preprint)
//arxiv.org/abs/1801.00631
- Google Scholar
- Adeli E
- Zhao Q
- Pfefferbaum A
- et al.
Representation learning with statistical independence to mitigate bias.
arXiv. 2018; (published online Oct 8.) (preprint)
//arxiv.org/abs/1910.03676
- Google Scholar
- Courtland R
Bias detectives: the researchers striving to make algorithms fair.
Nature. 2018; 558: 357-360
- Crossref
- PubMed
- Scopus (90)
- Google Scholar
Siau K, Wang W. Building trust in artificial intelligence, machine learning, and robotics. Cutter Bus Technol J. 2018; 31: 47-53.
Kaushal A, Altman R, Langlotz C. Geographic distribution of US cohorts used to train deep learning algorithms. JAMA. 2020; 324: 1212-1213.
Even before artificial intelligence, medical datasets and clinical trials had a long history of bias and of inadequate representation of women and of people of different races and ethnicities.
Cahan EM, Hernandez-Boussard T, Thadaney-Israni S, Rubin DL. Putting the data before the algorithm in big data addressing personalized healthcare. NPJ Digit Med. 2019; 2: 78.
Glymour B, Herington J. Measuring the biases that matter: the ethical and causal foundations for measures of fairness in algorithms. In: Proceedings of the Conference on Fairness, Accountability, and Transparency; Atlanta, GA, USA; Jan 29-31, 2019: 269-278.
Garg S. Hospitalization rates and characteristics of patients hospitalized with laboratory-confirmed coronavirus disease 2019—COVID-NET, 14 States, March 1–30, 2020. Morb Mortal Wkly Rep. 2020; 69: 458-464.
Bias can also result if algorithms developed for a particular purpose or context are transferred to a new one, for example, when an algorithm trained in an urban setting is deployed in a rural one.
Martinez-Martin N, Dunn LB, Roberts LW. Is it ethical to use prognostic estimates from machine learning to treat psychosis? AMA J Ethics. 2018; 20: E804-E811.
Danks D, London AJ. Algorithmic bias in autonomous systems. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence; Melbourne, Australia; Aug 19-25, 2017: 4691-4697.
Blonde L, Khunti K, Harris SB, Meizinger C, Skolnik NS. Interpretation and impact of real-world clinical data for the practicing clinician. Adv Ther. 2018; 35: 1763-1774.
Association for Computing Machinery. US Technology Policy Committee urges suspension of use of facial recognition technologies. //www.acm.org/media-center/2020/june/ustpc-issues-statement-on-facial-recognition-technologies June 30, 2020 (accessed July 10, 2020).
Association for Computing Machinery, US Technology Policy Committee. Statement on principles and prerequisites for the development, evaluation and use of unbiased facial recognition technologies. //www.acm.org/binaries/content/assets/public-policy/ustpc-facial-recognition-tech-statement.pdf June 30, 2020 (accessed July 10, 2020).
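The context-transfer concern discussed above can be made concrete with a simple audit: before deploying a trained model in a new setting, compare its error rates across the subgroups it will encounter there. The sketch below is a hypothetical illustration only; the subgroup labels and validation records are invented, and a real audit would use clinically validated outcome data and appropriate fairness metrics.

```python
# Hypothetical illustration: auditing a model's error rates by subgroup
# before transferring it to a new deployment context. All subgroup names
# and records below are invented for this sketch.

def subgroup_error_rates(records):
    """Compute the error rate for each subgroup from a list of
    (subgroup, predicted_label, actual_label) records."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Invented validation records drawn from the target (e.g. rural) context
records = [
    ("urban-like", 1, 1), ("urban-like", 0, 0), ("urban-like", 1, 1),
    ("rural-like", 1, 0), ("rural-like", 0, 1), ("rural-like", 1, 1),
]

rates = subgroup_error_rates(records)
# A large gap between subgroup error rates signals that the model may
# not transfer safely and needs re-validation in the new context.
print(rates)
```

In this toy example the model is accurate on the "urban-like" records but errs on most "rural-like" ones, the kind of disparity that should trigger re-validation before clinical use.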
Social implications
As with other artificial intelligence technologies introduced into health care, ambient intelligence is expected to have an effect on the clinical relationship.
Cohen IG, Amarasingham R, Shah A, Xie B, Lo B. The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Aff (Millwood). 2014; 33: 1139-1147.
Oosthuizen RM. Smart technology, artificial intelligence, robotics and algorithms (STARA): employees' perceptions and wellbeing in future workplaces. In: Potgieter IL, Ferreira N, Coetzee M, eds. Theory, research and dynamics of career wellbeing: becoming fit for the future. New York: Springer Publishing; 2019: 17-40.
Ambient intelligence applications need to be scrutinised for potential unintended consequences. Some uses will involve the creation of new software systems that allow for long-term surveillance of individuals and their activities, and for analysis of massive amounts of sensor data. Software of this type could plausibly be of interest to other institutions or organisations, inside or outside health care, for ethically problematic purposes, such as tracking or identifying individuals' movements. Ambient intelligence projects will therefore need to consider whether the system or software will eventually be sold to other parties; this consideration should cover the algorithms themselves, any technology developed in the course of the research, the research methods, and whether any data are included in the transfer. Unfortunately, it is unclear how much control research teams will have over the downstream consequences of their work, especially when the output is a paper outlining approaches to the development of machine learning tools. However, there are increasing calls for systems developers and users to be accountable for the consequences of the use and misuse of computer systems.
Use of ambient intelligence raises concerns about the increasing use of surveillance technology throughout society. Ambient intelligence in health-care settings can serve to further normalise surveillance, while minimising considerations of the burdens placed on specific groups, or society as a whole, by such practices. Moreover, it should not simply be assumed that the collection of detailed and comprehensive data on patients and activities associated with physical health will produce a scientific benefit. Ambient intelligence projects need to be developed with careful consideration of how the burdens and benefits of the research will be distributed and experienced by various stakeholders. Potential uses of ambient intelligence in health-care settings should be evaluated according to whether successful implementation will mainly benefit people of higher socioeconomic status or specific demographic groups. Additionally, because one area of focus for ambient intelligence applications is in monitoring older people in hospitals and home health-care settings, it is important to develop technology and guidance that is specific to supporting the needs of this population.
van Hoof J, Kort HSM, Markopoulos P, Soede M. Ambient intelligence, ethics and privacy. Gerontechnology. 2007; 6: 155-163.
Engaging stakeholders, beginning early in a project's development, is a key aspect of the ethical implementation of ambient intelligence. Ahonen and colleagues
Ahonen P, Alahuhta P, Daskala B. Recommendations for stakeholders. In: Wright D, Gutwirth S, Friedewald M, Vildjiounaite E, Punie Y, eds. Safeguards in a world of ambient intelligence. London: Springer; 2008: 253-265.
Reeves B, Ram N, Robinson TN, et al. Screenomics: a framework to capture and analyze personal life experiences and the ways that technology shapes them.