A SYSTEMATIC REVIEW OF PARAMETER-EFFICIENT FINE-TUNING (PEFT) IN SPEECH PROCESSING

Authors

  • Noor Ul Ain Liaqat Department of Computer Science, University of Management and Technology, Sialkot, Pakistan
  • Mutaher Ijaz Department of Computer Science, Sir Syed CASE Institute of Technology, Islamabad, Pakistan
  • Umair Muneer Butt Department of Computer Science, University of Management and Technology, Sialkot, Pakistan
  • Imtiaz Hussain Department of Computer Science, University of Management and Technology, Sialkot, Pakistan

DOI:

https://doi.org/10.63878/cjssr.v3i4.1707

Abstract

Recent breakthroughs in large-scale speech models such as Whisper, Wav2Vec 2.0, and HuBERT have greatly advanced speech processing tasks. Full fine-tuning, however, comes at a prohibitive cost, which restricts their application in low-resource or real-time settings. Parameter-efficient fine-tuning (PEFT) approaches, such as LoRA, QLoRA, adapters, and prompt tuning, allow compact adaptation by updating only a small subset of parameters. We review 33 studies (2021–2025) that apply PEFT to tasks such as automatic speech recognition (ASR), speaker verification, and emotion recognition. We organize methods by task, compare their efficiency and accuracy, and identify prominent trends. Results indicate that PEFT achieves competitive performance at reduced cost, enabling scalable deployment in resource-constrained environments.
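To make the parameter savings concrete, the low-rank idea behind LoRA can be sketched in a few lines. This is a minimal illustration, not any specific study's implementation; the layer sizes and rank below are hypothetical, and the forward pass shows why only the small matrices A and B need gradients.

```python
import numpy as np

# Minimal LoRA (Low-Rank Adaptation) sketch: instead of updating the full
# weight matrix W (d_out x d_in), learn a low-rank update delta_W = B @ A
# with B (d_out x r) and A (r x d_in), where r << min(d_out, d_in).
rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 8  # hypothetical layer sizes and LoRA rank
alpha = 16                    # scaling factor for the low-rank update

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def lora_forward(x):
    # Frozen path plus scaled low-rank correction; only A and B are trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer starts identical to the
# frozen pretrained layer, so training begins from the original model.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.2%}")  # ~3% here
```

The zero initialization of B is the standard trick that makes the adapted model exactly match the pretrained one at the start of fine-tuning; the trainable fraction shrinks further as layer width grows, since LoRA's cost scales linearly in the layer dimensions rather than quadratically.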


Published

2025-12-26

How to Cite

A SYSTEMATIC REVIEW OF PARAMETER-EFFICIENT FINE-TUNING (PEFT) IN SPEECH PROCESSING. (2025). Contemporary Journal of Social Science Review, 3(4), 1503-1512. https://doi.org/10.63878/cjssr.v3i4.1707