A cognitive system is an intelligent, adaptive system. It can learn from its environment, whether virtual or physical, and it uses artificial intelligence to choose the best actions for reaching its objectives. The aim is to design and teach a cognitive system to adopt and achieve its owner’s objectives.
This is the website of Gavin B. Rens.
The website provides details of my research into Cognitive Systems, a field within Artificial Intelligence.
More specifically, my field of research can be summarized as Autonomous Decision-making under Uncertainty. My particular interests are
- knowledge representation,
- probabilistic belief change,
- planning under uncertainty, and
- agent architectures.
My doctoral studies fall mostly under the category of knowledge representation. My supervisors and I developed the Stochastic Decision Logic (SDL) (Rens et al., 2015). SDL is a logic for specifying partially observable Markov decision process (POMDP) models and for reasoning about such models, even when they are only partially specified. The logic can decide entailment of arbitrary queries about sequences of actions and observations, and a sound, complete and terminating decision procedure is provided.
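To make the object of study concrete, here is a minimal sketch of the kind of POMDP model that SDL specifies and reasons about, together with the standard belief (state-estimation) update. The toy domain and all names are hypothetical illustrations, not taken from SDL itself:

```python
# A hypothetical two-state POMDP: is an oil can full or empty?
S = ["full", "empty"]      # states
A = ["drink", "weigh"]     # actions
Z = ["heavy", "light"]     # observations

# T[a][s][s2]: probability of moving from s to s2 under action a
T = {
    "drink": {"full": {"full": 0.0, "empty": 1.0},
              "empty": {"full": 0.0, "empty": 1.0}},
    "weigh": {"full": {"full": 1.0, "empty": 0.0},
              "empty": {"full": 0.0, "empty": 1.0}},
}

# O[a][s2][z]: probability of observing z after action a ends in state s2
O = {
    "weigh": {"full": {"heavy": 0.9, "light": 0.1},
              "empty": {"heavy": 0.2, "light": 0.8}},
    "drink": {"full": {"heavy": 0.5, "light": 0.5},
              "empty": {"heavy": 0.5, "light": 0.5}},
}

def belief_update(b, a, z):
    """POMDP state estimation: b'(s2) ∝ O(z|a,s2) · Σ_s T(s2|a,s) · b(s)."""
    new_b = {s2: O[a][s2][z] * sum(T[a][s][s2] * b[s] for s in S) for s2 in S}
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

# Starting maximally uncertain, weighing the can and sensing "heavy"
# should raise the probability that the can is full.
b0 = {"full": 0.5, "empty": 0.5}
b1 = belief_update(b0, "weigh", "heavy")
```

Queries in SDL concern exactly such sequences of actions and observations, but expressed and answered logically, even when parts of `T` and `O` are unknown.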
After completing my doctorate, I started working on probabilistic belief change, both revision and update. That is, the research involved revision of beliefs upon receiving correct information at odds with an agent’s current beliefs (Rens et al., 2016, 2018), and update of beliefs due to a changing environment (Rens and Meyer, 2018). My work on revision involved the method called Lewis imaging, which handles the zero-priors problem that Bayesian conditioning cannot: conditioning is undefined when the evidence has zero prior probability, whereas imaging shifts each world’s probability mass to the most similar world satisfying the evidence. My work on update involves generalizing the state-estimation function of POMDPs to deal with exogenous events.
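The contrast between the two revision mechanisms can be sketched on a toy three-world belief state. The worlds, probabilities and similarity order below are hypothetical, chosen only to exhibit the zero-priors problem:

```python
worlds = ["w1", "w2", "w3"]
belief = {"w1": 0.6, "w2": 0.4, "w3": 0.0}   # w3 has zero prior
evidence = {"w3"}                            # new info: only w3 is possible

def condition(b, e):
    """Bayesian conditioning; undefined when the evidence has zero prior."""
    p_e = sum(p for w, p in b.items() if w in e)
    if p_e == 0:
        return None  # conditioning breaks down on zero-prior evidence
    return {w: (p / p_e if w in e else 0.0) for w, p in b.items()}

# closest[w]: the evidence-world most similar to w (a hypothetical order)
closest = {"w1": "w3", "w2": "w3", "w3": "w3"}

def image(b, e):
    """Lewis imaging: each world shifts its mass to its closest e-world."""
    new_b = {w: 0.0 for w in b}
    for w, p in b.items():
        new_b[closest[w]] += p
    return new_b

posterior = condition(belief, evidence)  # None: the zero-priors problem
imaged = image(belief, evidence)         # all mass moves to w3
```

Conditioning has no answer here, while imaging yields a well-defined posterior concentrated on `w3`; the cited papers study generalizations of this idea.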
I have not focused on planning under uncertainty to the same extent as knowledge representation and probabilistic belief change, but it has formed part of my research since I started my academic career. For instance, as part of my Master’s degree, I developed a POMDP planner by extending DTGolog (Rens et al., 2008; Rens, 2010). And in 2017 my postdoc supervisor and I published an article reporting on an agent architecture involving POMDP planning (Rens and Moodley, 2017).
The journal article mentioned above (Rens and Moodley, 2017) is the culmination of research into combining the belief-desire-intention (BDI) framework with the POMDP formalism. The Hybrid POMDP-BDI Agent Architecture (Rens and Moodley, 2017) recommends actions in real-time (online), builds up a library of policies generated (to reuse later), and manages multiple goals in a sophisticated manner. I have also published two workshop papers on knowledge management frameworks explicitly involving probabilistic belief change and implicitly assuming the presence of a planning module (Rens, 2016; Rens et al., 2017).
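The three features just mentioned (online action recommendation, a reusable policy library, and multi-goal management) can be sketched as a skeleton agent loop. This is a loose illustration under simplifying assumptions, not the architecture from the paper; all names, the desire-level bookkeeping, and the trivial stand-in planner are hypothetical:

```python
class HybridAgent:
    """Skeleton loop in the spirit of a hybrid POMDP-BDI agent (a sketch)."""

    def __init__(self, goals):
        self.goals = dict(goals)   # goal -> current desire level
        self.policy_library = {}   # (goal, belief) -> cached policy

    def select_goal(self):
        # Intend the goal with the highest current desire level.
        return max(self.goals, key=self.goals.get)

    def plan(self, goal, belief):
        key = (goal, tuple(sorted(belief.items())))
        if key not in self.policy_library:
            # Stand-in for an online POMDP planner; cache for later reuse.
            self.policy_library[key] = lambda b, g=goal: "act-toward-" + g
        return self.policy_library[key]

    def step(self, belief):
        goal = self.select_goal()
        action = self.plan(goal, belief)(belief)
        # Pursuing a goal lowers its desire; neglected goals grow more urgent.
        self.goals[goal] *= 0.5
        for g in self.goals:
            if g != goal:
                self.goals[g] += 0.1
        return action

agent = HybridAgent({"recharge": 1.0, "deliver": 0.5})
first = agent.step({"s0": 1.0})   # pursues "recharge" (highest desire)
second = agent.step({"s0": 1.0})  # desire decay now favours "deliver"
```

The decaying/growing desire levels are one simple way to keep any single goal from monopolizing the agent, which is the flavour of multi-goal management the architecture aims at.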
I shall continue developing my understanding of, and contributing to the state of the art in, effective knowledge representation methods, probabilistic belief-change techniques, and online planning under uncertainty. And, where possible, I shall combine findings in these areas into systems for autonomous decision-making and for reasoning about the effects of actions and the epistemic consequences of sensor inputs.
Finally, I have a continuing interest in Reinforcement Learning and would like to incorporate Reinforcement Learning techniques into the learning of symbolic knowledge (e.g., as in Haeming and Peters (2013)).
One full paper was accepted, and a shorter paper was accepted for presentation as a poster, at the German conference on AI. The former reports investigations into generalizing approaches to probabilistic belief revision, moving from Bayesian conditioning to Lewis imaging. The latter is about how to formally describe how agents …
I had the opportunity to share my knowledge with postgraduate students and staff of the Faculty of Computer Science at the University of Ljubljana, Slovenia. Over two weeks, I taught two two-hour lessons on Probabilistic Belief Change and one three-hour crash course on Partially Observable Markov Decision Processes. Slovenia’s natural scenery is wonderful. I walked …
I was invited to Macquarie University in Sydney, Australia to collaborate with Abhaya Nayak on probabilistic belief revision, and trust between agents. The visit was for six weeks, ending early in December 2017. My wife accompanied me. We stayed at the student hostel across the road. Sydney is nice. One can travel for a flat …