Neural Style Transfer


Priyanshu Bilwane
Ranu Nanhe
Rohit Wachnekar
Sahil Rajankar
Pinky Gangwani

Abstract

Balancing fidelity and efficiency remains a core challenge in Neural Style Transfer (NST). Traditional optimization-based models generate visually rich stylizations but are computationally slow, whereas feed-forward networks achieve real-time inference at the cost of structure and detail. This paper presents a two-stage hybrid pipeline that bridges this gap. The first stage employs Adaptive Instance Normalization (AdaIN) for rapid global alignment of content and style statistics. The second stage introduces a lightweight VGG19-based refinement that optimizes a composite loss integrating Gram matrix texture matching, channel-wise mean–variance stability, and Sobel edge preservation. This combination enhances both structural integrity and color coherence in the final stylized outputs. Experiments on varied content–style pairs demonstrate significant visual improvements while maintaining computational efficiency. Furthermore, our findings expose the limitations of traditional quantitative metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), which fail to capture perceptual quality, highlighting the necessity of perceptual measures such as Learned Perceptual Image Patch Similarity (LPIPS) for evaluating generative visual tasks.
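The two core operations named in the abstract can be sketched compactly. The AdaIN stage shifts each channel of the content feature map so its mean and standard deviation match those of the style features, and the Gram matrix captures the channel correlations used for texture matching in the refinement loss. The sketch below is a minimal NumPy illustration of these two operations under assumed (C, H, W) feature-map shapes; it is not the authors' implementation, which operates on VGG19 activations.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: re-scale each content channel
    so its mean/std match the corresponding style channel.
    content, style: feature maps of shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content statistics, then impose style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: inner products between
    flattened channels, normalized by the number of elements. Matching
    Gram matrices between output and style drives texture transfer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)
```

In the paper's pipeline these operations would run on intermediate VGG19 activations; the refinement stage combines the Gram-matrix term with mean–variance and Sobel edge losses into a single composite objective.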



How to Cite
Bilwane, P., Nanhe, R., Wachnekar, R., Rajankar, S., & Gangwani, P. (2025). Neural Style Transfer. International Journal of Recent Advances in Engineering and Technology, 14(3s), 223–229. https://doi.org/10.65521/intjournalrecadvengtech.v14i3s.1695
