AstraMT: Instruction-Tuned Few-Shot Assamese–English Translation with Context-Aware Prompting and Reranking
Abstract
Developing machine translation (MT) systems for low-resource languages such as Assamese remains challenging due to limited parallel corpora and the language's morphological complexity. Recent instruction-tuned large language models (LLMs) offer few-shot translation capabilities, but static prompt-based methods often yield suboptimal performance in real-world scenarios. This paper introduces AstraMT, a modular pipeline for few-shot Assamese–English translation with LLMs. AstraMT combines a context-aware prompt selector (CAPS), syntactic prompt templates, multi-output reranking based on BLEU and COMET scores, and a lightweight post-editing module that corrects named-entity errors and auxiliary omissions. The framework was evaluated on two datasets: the FLORES-200 devtest set and a manually aligned subset of the Samanantar corpus. AstraMT achieved BLEU improvements of up to +3.2 and COMET gains of +0.07 over static few-shot prompting. The AstraMT-Mixtral variant reached a BLEU score of 23.0 on FLORES-200 and 21.3 on Samanantar, outperforming the supervised IndicTrans2 baseline. Qualitative and error analyses further highlighted AstraMT's ability to produce fluent and semantically accurate translations. These results demonstrate that AstraMT is an effective and extensible framework for LLM-based translation in low-resource settings and generalizes across different LLMs without additional fine-tuning.
Keywords
Context-aware prompt selector, prompt constructor, LLM, Mixtral