Data Quality, Bias, and Strategic Challenges in Reinforcement Learning for Healthcare: A Survey

Authors

  • Atta Ur Rahman Riphah Institute of Systems Engineering, Riphah International University, Islamabad, 46000, Pakistan
  • Bibi Saqia Department of Computer Science, University of Science and Technology, Bannu, 28100, Pakistan https://orcid.org/0009-0002-4613-5771
  • Yousef S. Alsenani Department of Information Science, Faculty of Computing and Information Technology (FCIT), King Abdulaziz University, Jeddah, 21589, Saudi Arabia https://orcid.org/0000-0001-5059-6277
  • Inam Ullah Department of Computer Engineering, Gachon University, Seongnam, 13120, Republic of Korea https://orcid.org/0000-0002-5879-569X

DOI:

https://doi.org/10.59461/ijdiic.v3i3.128

Keywords:

Bias Issues, Data Quality, Healthcare Applications, Reinforcement Learning, Strategic Obstacles

Abstract

Data quality is a critical aspect of data analytics, since it directly influences the accuracy and usefulness of the insights and predictions derived from data. Artificial Intelligence (AI) systems have expanded rapidly in the current era of technological advancement, opening new opportunities for healthcare applications. Reinforcement Learning (RL), an influential subfield of Machine Learning (ML), aims to optimize sequential decision-making through interaction with dynamic environments. In healthcare, RL can adapt treatment strategies, improve resource utilization, and support patient monitoring by exploiting diverse data modalities. The effectiveness of RL in healthcare therefore depends heavily on data quality: because model predictions directly affect patients' lives, poor data quality often leads to erroneous decisions that compromise patient safety and treatment quality. Biases embedded in the data further undermine the accuracy and effectiveness of RL models. Although RL holds enormous potential in healthcare, several strategic obstacles still prevent its widespread acceptance and deployment, and its implementation faces serious issues centered on data quality, bias, and strategic difficulties. This study provides a broad survey of these challenges, emphasizing how imbalanced, incomplete, and biased data can degrade the generalizability and performance of RL models. We critically assess the sources of data bias, including demographic imbalances and irregularities in electronic health records (EHRs), and their impact on RL algorithms. The survey offers a detailed examination of the interrelated issues of data quality, data bias, and strategic barriers to deploying RL models in healthcare applications. Its main contribution is a systematic review of these challenges together with a roadmap for future work aimed at improving the reliability, fairness, and scalability of RL in the healthcare sector.
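
To make the data-quality and demographic-imbalance concerns concrete, the following minimal Python sketch (not taken from the paper; the column names such as sex, ethnicity, and lab_lactate are illustrative assumptions, not a real EHR schema) shows the kind of audit one might run on an EHR-derived cohort before using it to train an offline RL policy: it reports how each demographic group is represented and how often key fields are missing within each group.

```python
# Minimal, illustrative sketch (not from the paper): auditing an EHR-derived
# cohort for demographic imbalance and per-group missingness before it is used
# to train an offline RL policy. All column names below are hypothetical.
import pandas as pd


def audit_cohort(df: pd.DataFrame, group_cols, feature_cols):
    """Return group representation shares and per-group missingness rates."""
    report = {}
    for col in group_cols:
        # Representation: share of each demographic group in the cohort.
        report[f"{col}: share"] = df[col].value_counts(normalize=True).round(3)
        # Irregularity: fraction of missing values per feature within each group.
        report[f"{col}: missingness"] = (
            df.groupby(col)[feature_cols].agg(lambda s: s.isna().mean()).round(3)
        )
    return report


if __name__ == "__main__":
    cohort = pd.DataFrame({
        "sex":         ["F", "M", "M", "M", "M", "F"],
        "ethnicity":   ["A", "B", "B", "B", "B", "A"],
        "lab_lactate": [2.1, None, 1.4, 3.0, None, None],
        "outcome":     [1, 1, 0, 1, 0, None],
    })
    for name, table in audit_cohort(cohort, ["sex", "ethnicity"],
                                    ["lab_lactate", "outcome"]).items():
        print(f"\n{name}\n{table}")
```

Large gaps between groups in either representation or missingness are the kind of data-quality and bias issues the survey identifies as degrading the generalizability and fairness of RL policies trained on EHR data.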

Published

20-09-2024

How to Cite

Atta Ur Rahman, Bibi Saqia, Yousef S. Alsenani, & Inam Ullah. (2024). Data Quality, Bias, and Strategic Challenges in Reinforcement Learning for Healthcare: A Survey. International Journal of Data Informatics and Intelligent Computing, 3(3), 24–42. https://doi.org/10.59461/ijdiic.v3i3.128

Issue

Vol. 3 No. 3 (2024)

Section

Regular Issue