Brain-inspired meta-reinforcement learning cognitive control in conflictual inhibition decision-making task for artificial agents
Robertazzi F.; Vissani M.; Schillaci G.; Falotico E.
2022-01-01
Abstract
Conflictual cues and unexpected changes in real-world human scenarios may be detrimental to the execution of tasks by artificial agents, thus affecting their performance. Meta-learning applied to reinforcement learning may enhance the design of control algorithms, where an outer learning system progressively adjusts the operation of an inner learning system, leading to practical benefits for the learning scheme. Here, we developed a brain-inspired meta-learning framework for inhibitory cognitive control that i) exploits the meta-learning principles of the neuromodulation theory proposed by Doya, ii) relies on a well-established neural architecture comprising distributed learning systems in the human brain, and iii) proposes optimization rules for the meta-learning hyperparameters that mimic the dynamics of the major neurotransmitters in the brain. We tested the ability of an artificial agent to inhibit its action command in two well-known tasks from the literature: the NoGo and Stop-Signal paradigms. After a short learning phase, the artificial agent learned to react to the hold signal, and hence to successfully inhibit the motor command in both tasks, via the continuous adjustment of the learning hyperparameters. We found a significant increase in global accuracy and correct inhibition, and a reduction in the latency required to cancel the action process, i.e., the Stop-signal reaction time. We also performed a sensitivity analysis to evaluate the behavioral effects of the meta-parameters, focusing on the serotonergic modulation of dopamine release. We demonstrated that brain-inspired principles can be integrated into artificial agents to achieve more flexible behavior when conflictual inhibitory signals are present in the environment.
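The abstract describes the approach only at a high level. As a point of reference, the minimal sketch below illustrates the general idea behind Doya's neuromodulation mapping in a toy Go/NoGo setting: an inner TD-style value learner whose meta-parameters are nudged by an outer loop (learning rate ~ acetylcholine, discount factor ~ serotonin, exploration temperature ~ noradrenaline, reward prediction error ~ dopamine). The payoff function, constants, and outer-loop adjustment heuristics are hypothetical placeholders, not the neural architecture or optimization rules proposed in the paper.

```python
# Illustrative sketch only: a toy Go/NoGo agent with Doya-style meta-parameters.
# The update rules and constants are hypothetical and NOT the authors' model.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 2, 2           # states: 0 = Go cue, 1 = NoGo cue; actions: 0 = respond, 1 = withhold
Q = np.zeros((N_STATES, N_ACTIONS))  # inner-loop action values

# Meta-parameters (the outer loop slowly adapts these during learning).
alpha, gamma, beta = 0.1, 0.9, 2.0   # ~ acetylcholine, serotonin, noradrenaline

def softmax(q, beta):
    """Action selection; beta controls the exploration/exploitation balance."""
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

def reward(state, action):
    """Toy payoff: +1 for responding to Go or withholding on NoGo, -1 otherwise."""
    correct = (state == 0 and action == 0) or (state == 1 and action == 1)
    return 1.0 if correct else -1.0

avg_reward = 0.0
for trial in range(2000):
    s = rng.integers(N_STATES)                        # Go or NoGo cue
    a = rng.choice(N_ACTIONS, p=softmax(Q[s], beta))  # sample an action
    r = reward(s, a)

    # Inner loop: TD-style update; delta is the dopamine-like prediction error
    # (the gamma term only matters in multi-step settings, shown for completeness).
    delta = r + gamma * 0.0 - Q[s, a]
    Q[s, a] += alpha * delta

    # Outer loop (hypothetical heuristics): track average reward, sharpen action
    # selection as performance improves, and slowly anneal plasticity.
    avg_reward += 0.01 * (r - avg_reward)
    beta = min(10.0, beta + 0.01 * avg_reward)
    alpha = max(0.01, alpha * 0.999)

print("Learned Q-values:\n", Q)
print("Final meta-parameters: alpha=%.3f, gamma=%.2f, beta=%.2f" % (alpha, gamma, beta))
```

In this toy setting, the agent learns to withhold its response on NoGo cues as the action values separate; the paper's framework additionally handles the Stop-Signal paradigm, where the inhibitory cue arrives after movement preparation has begun and the relevant behavioral measure is the Stop-signal reaction time.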