Using Deep Learning Models to Improve Efficiency in Machine Learning Applications

Authors

  • Siswanto Siswanto, Universitas Sains dan Teknologi Komputer
  • Maya Utami Dewi, Universitas Sains dan Teknologi Komputer
  • Siti Kholifah, Universitas Sains dan Teknologi Komputer
  • Greget Widhiati, Universitas Sains dan Teknologi Komputer
  • Widya Aryani, Universitas Sains dan Teknologi Komputer

DOI:

https://doi.org/10.54066/jpsi.v1i4.1619

Keywords:

Artificial Intelligence, Deep Learning, Machine Learning

Abstract

The use of deep learning models has become a major focus in optimizing the efficiency of machine learning applications. This research discusses various deep learning models that can be applied to improve efficiency in machine learning applications. These models are designed to handle the complexity of machine learning tasks with a high level of accuracy while still accounting for computational efficiency. The article takes an in-depth look at several deep learning models that have proven effective across application domains: convolutional neural networks (CNNs) for image processing, recurrent neural networks (RNNs) for sequential data, and transformer-based models for natural language processing tasks. In addition, tuning and optimization strategies for deep learning models, such as pruning and quantization, are discussed as ways to use computational resources more efficiently. The research identifies challenges and opportunities in integrating these deep learning models into machine learning applications with maximum efficiency. By weighing the need for accuracy against limited computational resources, it provides a holistic view of the approaches that can be applied to deal with complexity in diverse machine learning scenarios. The results are expected to make a significant contribution to the development of efficient and effective machine learning applications.
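The abstract singles out pruning and quantization as tuning strategies for trimming compute cost. As a minimal illustrative sketch (not code from the article), the snippet below applies magnitude-based weight pruning and post-training dynamic quantization to a toy CNN using PyTorch; the architecture, the 50% sparsity level, and the choice of PyTorch are assumptions made purely for illustration.

```python
# Minimal sketch of pruning + quantization (illustrative only, not from the article).
# Assumes PyTorch is available; the toy CNN and 50% sparsity are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in for the CNN image models discussed in the abstract.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# 1) Magnitude-based (L1) pruning: zero out the 50% smallest weights in each layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the sparsity into the weight tensor

# 2) Post-training dynamic quantization: store Linear weights as int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Sanity check on a dummy 32x32 RGB input.
x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # torch.Size([1, 10])
```

In practice, pruned weights only translate into actual speed-ups on hardware or runtimes that exploit sparsity, while dynamic quantization mainly reduces memory footprint and CPU inference latency for the quantized layers.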

Published

2023-11-30

How to Cite

Siswanto Siswanto, Maya Utami Dewi, Siti Kholifah, Greget Widhiati, & Widya Aryani. (2023). Penggunaan Model Deep Learning Untuk Meningkatkan Efisiensi Dalam Aplikasi Machine Learning. JURNAL PENELITIAN SISTEM INFORMASI (JPSI), 1(4), 215–238. https://doi.org/10.54066/jpsi.v1i4.1619