Prerequisites of In-Context Learning for Transformers on Queries

Bulat Shkanov, Mikhail Alexandrov

Abstract


Generative pre-trained transformers (GPT) with their 200+ billion parameters have already demonstrated the ability to successfully solve a wide range of text-related problems without additional task-specific training. However, it has been observed that solution quality can be significantly improved for certain queries that reflect the task formulation and conditions. This indicates that the transformer is effectively further trained on the query context, and the aim of this study is to show why GPT transformers are able to do so. To this end, the article jointly considers elements of the transformer architecture (data compressors and sentiment neurons), elements of the user interface with transformers (zero-shot and few-shot prompts), and text processing procedures (arithmetic coding and minimum description length). The authors attempt to provide a theoretical justification for the convergence of the sequential fine-tuning process using Hoeffding's inequality. The study presents experimental results demonstrating GPT transformers' capabilities for in-context learning, which confirms their potential for the further development of natural language processing technologies.
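For reference, the bound the abstract appeals to is the standard Hoeffding inequality; the authors' specific application to the sequential fine-tuning process is not reproduced here. For independent random variables $X_1, \dots, X_n$ with $a_i \le X_i \le b_i$ and sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$, it states, for any $t > 0$,

\[
P\bigl(\,|\bar{X} - \mathbb{E}[\bar{X}]| \ge t\,\bigr) \;\le\; 2\exp\!\left(-\frac{2 n^2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right).
\]

The distinction between zero-shot and few-shot prompts mentioned in the abstract can be illustrated by the following minimal Python sketch. It only shows how the two prompt types differ in structure; the function and example texts are hypothetical and are not taken from the article, and the resulting string would be passed to a GPT model as a query.

# Minimal sketch (hypothetical): composing zero-shot and few-shot prompts.

def build_prompt(task_description, query, examples=None):
    """Compose a prompt: zero-shot if `examples` is None, few-shot otherwise."""
    parts = [task_description.strip()]
    for example_input, example_output in (examples or []):
        # Each demonstration is appended as an input/output pair before the query.
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: task formulation only.
zero_shot = build_prompt(
    "Classify the sentiment of the sentence as positive or negative.",
    "The service was surprisingly good.")

# Few-shot: task formulation plus demonstrations of the same task in the query context.
few_shot = build_prompt(
    "Classify the sentiment of the sentence as positive or negative.",
    "The service was surprisingly good.",
    examples=[("I loved this film.", "positive"),
              ("The battery died within an hour.", "negative")])

print(zero_shot)
print("---")
print(few_shot)

In the few-shot case the demonstrations supply the query context from which, as the abstract argues, the transformer is effectively further trained without any parameter updates.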

Keywords


Data compressors, sentiment neurons, in-context learning, zero-shot learning, few-shot learning
