STORE - Staffordshire Online Repository

RLAS-BIABC: A Reinforcement Learning-based Answer Selection using the BERT Model Boosted by an Improved ABC Algorithm

Gharagozlou, Hamid, Mohammadzadeh, Javad, Bastanfard, Azam and SHIRY GHIDARY, Saeed (2022) RLAS-BIABC: A Reinforcement Learning-based Answer Selection using the BERT Model Boosted by an Improved ABC Algorithm. Computational Intelligence and Neuroscience (CIN). ISSN 1687-5265 (In Press)

Full text: A Reinforcement Learning-Based Answer Selection.pdf (Author's Accepted Version, 1MB). Restricted to Repository staff only. Available under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence.

Abstract or description

Answer selection (AS) is a critical subtask of the open-domain question answering (QA) problem. This paper proposes RLAS-BIABC, an AS method built on an attention-mechanism-based Long Short-Term Memory (LSTM) network and Bidirectional Encoder Representations from Transformers (BERT) word embeddings, enriched by an improved artificial bee colony (ABC) algorithm for pre-training and a reinforcement learning-based algorithm for training the back-propagation (BP) weights. BERT can be incorporated into a downstream task and fine-tuned as a unified task-specific architecture, and the pre-trained BERT model can capture different linguistic properties. Existing algorithms typically train the AS model as a two-class classifier on positive-negative pairs: a positive pair couples a question with a genuine answer, while a negative pair couples it with a fake one, and the output should be one for positive pairs and zero for negative ones. In practice, negative pairs far outnumber positive pairs, producing an imbalanced classification problem that drastically reduces system performance. To address this, we cast classification as a sequential decision-making process in which the agent takes one sample at each step and classifies it. For each classification action, the agent receives a reward, where the reward for the majority class is smaller than the reward for the minority class. Ultimately, the agent converges to an optimal set of policy weights. We initialize the policy weights with the improved ABC algorithm; this initialization strategy helps avoid problems such as getting stuck in a local optimum. Although ABC performs well on most tasks, it still has a shortcoming: it disregards the fitness of the related pair of individuals when discovering a neighboring food source position.
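The imbalance-aware reward scheme described above can be sketched as follows. This is our own minimal illustration, not the paper's code: the policy is reduced to a single logistic layer, and the majority-class reward scale `lam`, the learning rate, and the toy data are all assumed values chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(pred, label, minority_label=1, lam=0.2):
    """Reward for one classification action: +/-1 for minority-class
    samples, +/-lam (lam < 1) for majority-class samples, so the agent
    is rewarded less for the over-represented class."""
    scale = 1.0 if label == minority_label else lam
    return scale if pred == label else -scale

# Imbalanced toy data: roughly 10% positive (minority) pairs.
X = rng.normal(size=(200, 4))
y = (rng.random(200) < 0.1).astype(int)

w = np.zeros(4)  # policy weights (logistic policy over two actions)
for epoch in range(50):
    for x, label in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-x @ w))   # P(action = 1 | x)
        action = int(rng.random() < p)     # sample a classification action
        r = reward(action, label)
        # REINFORCE-style update: grad of log pi(action | x) is (action - p) * x
        w += 0.05 * r * (action - p) * x
```

In the paper the policy is the full LSTM/BERT network and its weights are initialized by the improved ABC algorithm rather than zeros; only the reward asymmetry between the two classes is shown here.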
Therefore, this paper also proposes a mutual learning technique that modifies the generated candidate food source using the fitter of two individuals selected by a mutual learning factor. We tested our model on three datasets, LegalQA, TrecQA, and WikiQA, and the results show that RLAS-BIABC can be regarded as a state-of-the-art method.
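One plausible reading of the mutual learning step can be sketched as below: the candidate food source is built around whichever of the two selected individuals has the higher fitness, perturbed along one random dimension. The function name, the factor `F`, and the perturbation form are our assumptions for illustration; the paper's exact update rule may differ.

```python
import random

def mutual_learning_candidate(x_i, x_k, fit_i, fit_k, F=1.0):
    """Generate a candidate food source from two individuals, taking the
    one with higher fitness as the base (the standard ABC neighbor search
    ignores this fitness comparison). F is a hypothetical mutual learning
    factor scaling the perturbation."""
    better, worse = (x_i, x_k) if fit_i >= fit_k else (x_k, x_i)
    j = random.randrange(len(better))      # perturb one random dimension
    phi = random.uniform(-1.0, 1.0)
    v = list(better)
    v[j] = better[j] + F * phi * (better[j] - worse[j])
    return v
```

In standard ABC the neighbor would be generated around `x_i` regardless of fitness; the comparison against `x_k` is the modification being illustrated.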

Item Type: Article
Uncontrolled Keywords: Answer selection, LSTM, Attention mechanism, Imbalanced classification, Deep reinforcement learning, BERT, ABC algorithm.
Faculty: School of Digital, Technologies and Arts > Computer Science, AI and Robotics
Depositing User: Saeed SHIRY GHIDARY
Date Deposited: 26 Apr 2022 10:49
Last Modified: 02 May 2022 17:17
URI: https://eprints.staffs.ac.uk/id/eprint/7293

