LED-Net: A Lightweight Edge Detection Network


Journal article


Shucheng Ji, Xiaochen Yuan, Junqi Bao, Tong Liu
Pattern Recognition Letters, vol. 187, 2025 Jan, pp. 56--62


APA
Ji, S., Yuan, X., Bao, J., & Liu, T. (2025). LED-Net: A Lightweight Edge Detection Network. Pattern Recognition Letters, 187, 56–62. https://doi.org/10.1016/j.patrec.2024.11.006


Chicago/Turabian
Ji, Shucheng, Xiaochen Yuan, Junqi Bao, and Tong Liu. “LED-Net: A Lightweight Edge Detection Network.” Pattern Recognition Letters 187 (January 2025): 56–62.


MLA
Ji, Shucheng, et al. “LED-Net: A Lightweight Edge Detection Network.” Pattern Recognition Letters, vol. 187, Jan. 2025, pp. 56–62, doi:10.1016/j.patrec.2024.11.006.


BibTeX

@article{ji2025a,
  title = {LED-Net: A Lightweight Edge Detection Network},
  year = {2025},
  month = jan,
  journal = {Pattern Recognition Letters},
  pages = {56--62},
  volume = {187},
  doi = {10.1016/j.patrec.2024.11.006},
  author = {Ji, Shucheng and Yuan, Xiaochen and Bao, Junqi and Liu, Tong}
}


⭐️ Highlights

  • LED-Net improves edge detection performance while reducing computational complexity. 
  • Positional information is introduced into the edge feature extraction process. 
  • The feature sampling process becomes efficient and accurate. 
  • Edge prediction maps become thinner through the proposed Unified Loss Function. 

 

Abstract: As a fundamental task in computer vision, edge detection is becoming increasingly vital in many fields. Recently, large-parameter pre-trained models have been applied to edge detection tasks; however, they demand significant computational resources. This paper presents a Lightweight Edge Detection Network (LED-Net) with only 50K parameters. It mainly consists of three blocks: the Coordinate Depthwise Separable Convolution Block (CDSCB), the Sample Depthwise Separable Convolution Block (SDSCB), and the Feature Fusion Block (FFB). The CDSCB extracts multi-scale features with positional information, reducing time complexity while maintaining performance. The SDSCB then rescales the multi-scale features to a unified resolution efficiently. To obtain refined edge lines, the FFB aggregates the features. In addition, a unified loss function is proposed to achieve thinner edge predictions. Trained on the BIPED dataset and evaluated on the UDED dataset, the proposed LED-Net achieves superior performance in ODS (0.839), OIS (0.855), and AP (0.830). 
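The abstract attributes LED-Net's small parameter budget largely to depthwise separable convolutions (the building block behind both CDSCB and SDSCB). A minimal sketch of the parameter-count arithmetic illustrates why this factorization is so much cheaper than a standard convolution; the channel sizes below are illustrative assumptions, not LED-Net's actual configuration:

```python
# Parameter counts (ignoring biases) for a standard convolution versus a
# depthwise separable one. Channel sizes are illustrative assumptions only.

def standard_conv_params(c_in, c_out, k):
    """k x k standard conv: every output channel filters every input channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) + pointwise (1 x 1) conv."""
    depthwise = k * k * c_in   # spatial filtering, per-channel
    pointwise = c_in * c_out   # 1 x 1 conv that mixes channels
    return depthwise + pointwise

c_in, c_out, k = 32, 64, 3     # assumed example sizes
std = standard_conv_params(c_in, c_out, k)        # 3*3*32*64 = 18432
sep = depthwise_separable_params(c_in, c_out, k)  # 3*3*32 + 32*64 = 2336
print(std, sep, round(std / sep, 1))              # roughly 7.9x fewer parameters
```

Stacking such blocks in place of standard convolutions is what makes a total budget on the order of 50K parameters plausible for a multi-scale network.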

