Human Agency in Live Subtitling through Respeaking
Towards a Taxonomy of Effective Editing
DOI: https://doi.org/10.47476/jat.v7i2.2024.302
Keywords: respeaking, interlingual, intralingual, accuracy evaluation, effective editions (EEs), subtitling
Abstract
This paper examines the phenomenon of effective editions (EEs) as used by respeakers during live assignments. While the term "editing" conventionally refers to the refinement of written text, live spoken language editing has been recognised as a regular practice in the context of intra- and interlingual respeaking (Romero-Fresco and Pöchhacker 2017). However, the existing definition of EEs can benefit from expansion, and a more comprehensive exploration of the range of phenomena encompassed under this umbrella term promises further gains. This paper endeavours to fill the gap by scrutinising instances of EEs from an extensive database gathered within the framework of the ESRC-funded SMART project (ES/T002530/1, 2020-2023) on interlingual respeaking. Based on bottom-up, empirical analysis, we propose a straightforward taxonomy of EEs, consisting of the main categories of condensation, re-expression, and compensation. Our analysis reveals the pervasive nature of EEs, which also emerge as significant predictors of respeakers’ performance accuracy. The taxonomy we present is grounded in concrete examples and can not only facilitate a more equitable and pragmatic assessment of subtitle accuracy but also help refine (semi-)automated subtitle accuracy evaluation systems, which are currently at a prototypical stage (Bacigalupe and Romero-Fresco 2023). Furthermore, the proposed taxonomy is relevant for respeaker training and/or upskilling, where proficiency in effective editing will lead to enhanced performance. Given the nascent status of research in this domain, the paper concludes by delineating prospective directions for further exploration of EEs.
Lay summary
This paper focuses on how respeakers positively change the spoken language they hear from the source during a live assignment. Respeakers are highly skilled language professionals who strategically reformulate what they hear from the speaker and dictate it to speech recognition software, which turns their speech into text displayed as live subtitles with a few seconds’ delay. Although the word “editing” is normally associated with improving written text, in this case we deal with effective editions (EEs) of spoken language, performed by respeakers “on the go”. EEs have been referred to in previous studies, but they have not been studied in depth. We try to fill this gap by analysing cases of EEs and offering a simple taxonomy. We find that EEs are a very frequent phenomenon that also relates to how well respeakers perform. The taxonomy we propose is based on concrete examples and can be used to evaluate subtitle accuracy in a fairer and more pragmatic way. It can also help develop better (semi-)automated quality evaluation systems for larger datasets. Another advantage of the taxonomy is its application in respeaker training and/or upskilling: knowing how to edit effectively can improve respeakers’ performance. As this topic is still understudied, we finish the paper by indicating a few possible research directions to further explore EEs.