Deep learning algorithms and their relevance: A review
DOI: https://doi.org/10.59461/ijdiic.v2i4.78

Keywords: Neural Networks, Convolutional Neural Network, Recurrent Neural Network, Long Short-Term Memory, Generative Adversarial Network

Abstract
Deep learning algorithms and models constitute one of the most revolutionary areas in computer science today. This paper discusses deep learning and various supervised, unsupervised, and reinforcement learning models. An overview is provided of the Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Self-Organizing Map (SOM), Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), Generative Adversarial Network (GAN), autoencoder, Gated Recurrent Unit (GRU), and Bidirectional LSTM. Various deep learning application areas are also discussed. ChatGPT, currently the most trending model, which can understand natural language and respond to needs in various ways, uses supervised and reinforcement learning techniques. Additionally, the limitations of deep learning are discussed. This paper provides a snapshot of deep learning.
Copyright (c) 2023 Nisha.C.M, N. Thangarasu

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.