Optimizing YOLO12n for Vehicle Detection on Edge Devices via Batch Size and Input Resolution Ablation

Authors

  • Yusuf Anshori, Universitas Tadulako
  • Muhammad Yazdi
  • Yuri Yudhaswana Joefrie

DOI:

https://doi.org/10.30606/rjti.v5i1.4461

Keywords:

YOLO12n, Vehicle Detection, Batch Size, Input Resolution, Edge Computing.

Abstract

Real-time vehicle detection on edge devices is a key component of Intelligent Transportation Systems (ITS). Detection performance on edge devices is strongly influenced by training configurations such as batch size and input resolution, yet their combined effect on YOLO12n for PKJI-based vehicle classification has not been studied systematically. This study evaluates 16 training configurations formed by combining four batch sizes (8, 16, 32, and 64) with four input resolutions (320, 480, 640, and 720 pixels) for vehicle detection on an NVIDIA Jetson Orin Nano. The resulting models were then tested on a dataset of 9,381 images across three inference formats: PyTorch, TensorRT FP32, and TensorRT FP16. The results show that the interaction between batch size and input resolution is non-linear. Three configurations were unstable during training, and higher input resolution did not guarantee better real-time performance on the edge device. The best configuration in this experiment was TensorRT FP16 at a resolution of 480 pixels (FPS = 35.01, mAP@50 = 0.9109), with GPU memory consumption half that of the FP32 format. These findings show that training configuration and inference optimization strategy play a decisive role in determining the feasibility of deploying vehicle detection on edge devices.
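The 16-run ablation described above can be sketched as a simple configuration grid. The sketch below is illustrative only: the dataset file name (`vehicles.yaml`) is a hypothetical placeholder, and the commented-out training and export calls assume the Ultralytics Python API, which the paper does not explicitly name as its tooling.

```python
from itertools import product

# Ablation factors from the abstract:
# 4 batch sizes x 4 input resolutions = 16 training configurations.
BATCH_SIZES = (8, 16, 32, 64)
IMG_SIZES = (320, 480, 640, 720)  # pixels

def ablation_grid():
    """Return the 16 (batch_size, imgsz) pairs evaluated in the study."""
    return list(product(BATCH_SIZES, IMG_SIZES))

def run_ablation(data_yaml="vehicles.yaml"):
    # Hypothetical sketch of the experimental loop; each trained model
    # would subsequently be benchmarked in PyTorch, TensorRT FP32, and
    # TensorRT FP16 on the Jetson Orin Nano.
    for batch, imgsz in ablation_grid():
        # model = YOLO("yolo12n.pt")
        # model.train(data=data_yaml, batch=batch, imgsz=imgsz)
        # model.export(format="engine", half=True)  # TensorRT FP16
        print(f"config: batch={batch}, imgsz={imgsz}")

if __name__ == "__main__":
    print(len(ablation_grid()))  # 16 configurations
```

The reported best result (TensorRT FP16, 480 px) corresponds to one cell of this grid; enumerating the grid explicitly makes the full factorial design reproducible.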


Author Biographies

Muhammad Yazdi

Department of Information Technology, Universitas Tadulako, Indonesia

Yuri Yudhaswana Joefrie

Department of Information Technology, Universitas Tadulako, Indonesia



Published

2026-03-31

How to Cite

[1]
Yusuf Anshori, Muhammad Yazdi, and Yuri Yudhaswana Joefrie, “Optimasi YOLO12n untuk Deteksi Kendaraan pada Perangkat Edge melalui Ablasi Batch size dan Resolusi Input”, RJTI, vol. 5, no. 1, pp. 109–119, Mar. 2026.

Issue

Section

Articles
