UNSW Canberra researchers are applying concepts from human psychology to autonomous agents, allowing the agents to adapt to complex and dynamic environments.

The team, which includes Dr Dilini Samarasinghe, Associate Professor Michael (Spike) Barlow and Dr Erandi Lakshika, has designed a novel artificial intelligence model that adopts the concept of ‘flow’ from psychology.

Flow refers to the mental state experienced by an individual when they are fully immersed in a task and find it intrinsically rewarding to engage with.

“Humans performing tasks while in a flow-zone experience a sense of discovery, which drives them with an intrinsic motivation to reach higher performance levels,” Dr Samarasinghe said.

“Drawing insights from this concept, we designed our model so that the agents are trained in an environment where the challenges are progressively fine-tuned across training time to keep them in a flow-zone.”
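
In broad terms, this resembles curriculum-style training, where task difficulty is adjusted so the challenge tracks the agent's current skill. The sketch below is a rough, hypothetical illustration of that general idea, not the team's published model; the class name, parameters and update rule are assumptions made purely for illustration.

    # Hypothetical flow-zone-style curriculum (illustrative only, not the UNSW model):
    # nudge task difficulty each episode so the challenge matches the agent's skill.
    class FlowZoneCurriculum:
        def __init__(self, target_success=0.7, step=0.05):
            self.difficulty = 0.1                  # current task difficulty in [0, 1]
            self.target_success = target_success   # success rate we try to hold
            self.step = step                       # how quickly difficulty is adjusted

        def update(self, success_rate):
            """Raise difficulty when the task is too easy, lower it when too hard."""
            if success_rate > self.target_success:
                self.difficulty = min(1.0, self.difficulty + self.step)
            elif success_rate < self.target_success:
                self.difficulty = max(0.0, self.difficulty - self.step)
            return self.difficulty

    # Example loop with a stand-in for learning progress: success is more likely
    # when the agent's skill exceeds the current difficulty.
    curriculum = FlowZoneCurriculum()
    skill = 0.2
    for episode in range(50):
        success_rate = max(0.0, min(1.0, 0.5 + (skill - curriculum.difficulty)))
        curriculum.update(success_rate)
        skill += 0.01
    print(f"final difficulty: {curriculum.difficulty:.2f}")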

Dr Samarasinghe said autonomous agents should be able to interact with their environments and work towards a goal with minimal human guidance in the decision-making process.

To accomplish this, she said they should possess a pre-defined or dynamically acquired understanding of the goals and the environments they are operating in.

“The agents trained with our proposed technique have proven to be more robust than those trained with traditional learning strategies, as they perform better under the random variations of the task environments they were presented with after training,” Dr Samarasinghe said.

“This has major implications for the future in developing resilient intelligent agents and simulation technologies for modelling decision making and control strategies in defence-based environments, among other applications.”

Dr Samarasinghe will discuss the project at the Australian Defence Science, Technology and Research (ADSTAR) Summit in July.

She said autonomous agent models can be used to simulate complex defence-related environments, including military fields, battle strategies, and mission training.

However, adapting to changing and uncertain conditions is a known limitation of existing artificial agent models.

“The work proposed in my presentation is investigating an AI technique that mitigates this issue,” Dr Samarasinghe said.

“It drives the agents with both external goals and an intrinsic curiosity to fully explore the task space, building an awareness of the environment that they are performing in. This awareness helps agents identify more robust and generalisable solutions for problems in dynamic environments.”

“As such, these autonomous agents are capable of adjusting their behaviour to deal with unpredictable and varying conditions.”
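
Combining an external task reward with an intrinsic exploration bonus is a standard pattern in intrinsically motivated reinforcement learning. The sketch below shows one simple, hypothetical form of it, a count-based novelty bonus added to the extrinsic reward; it illustrates the general idea only and is not the specific mechanism used in this project.

    # Hypothetical reward shaping (illustrative only): extrinsic task reward plus a
    # count-based curiosity bonus that decays as a state becomes familiar.
    from collections import defaultdict
    import math

    class CuriousRewardShaper:
        def __init__(self, curiosity_weight=0.1):
            self.visit_counts = defaultdict(int)      # how often each state was seen
            self.curiosity_weight = curiosity_weight  # trade-off between goal and curiosity

        def shaped_reward(self, state, extrinsic_reward):
            """Return the task reward plus a novelty bonus for unfamiliar states."""
            self.visit_counts[state] += 1
            novelty_bonus = 1.0 / math.sqrt(self.visit_counts[state])
            return extrinsic_reward + self.curiosity_weight * novelty_bonus

    # Example: repeat visits to the same state earn a shrinking exploration bonus.
    shaper = CuriousRewardShaper()
    print(shaper.shaped_reward("room_A", 0.0))   # first visit: full bonus (0.1)
    print(shaper.shaped_reward("room_A", 0.0))   # second visit: smaller bonus (~0.07)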

The work opens up new application domains, as well as research avenues, for modelling complex decision-making agents within unpredictable real-world environments.

Dr Samarasinghe said she hopes her presentation will draw attention to the potential of autonomous agent systems when they are driven by an intrinsic curiosity for discovery and skill improvement, in addition to typical goal-oriented objectives.

She said it also emphasises the significance of interdisciplinary ideas and collaborations in moving AI research forward.

“This work demonstrates that concepts and principles from cognitive sciences have huge implications for improving AI models that are designed for complex problem solving,” she said.

“I would like to thank A/Prof Michael (Spike) Barlow, and Dr Erandi Lakshika for their support and involvement in the project. These results would not have been achieved without their contribution.”