Confirmed Speakers for BICA 2025

Alexei Samsonovich

Alexei V. Samsonovich is a Professor in the Cybernetics Department and the head of the BICA Lab in the Institute of Cyber Intelligence Systems at the National Research Nuclear University “MEPhI” in Moscow, Russia, and an Affiliate Faculty member in the Department of Bioengineering at George Mason University (Fairfax, VA, USA). He holds a Ph.D. in Applied Mathematics from the University of Arizona (1997), where he co-developed a continuous-attractor theory of hippocampal spatial maps (with Prof. B.L. McNaughton) and a mental state framework for cognitive modeling (with Prof. L. Nadel). From 2000 Dr. Samsonovich worked in the Krasnow Institute for Advanced Study at George Mason University, where his highest rank was Assistant Professor. Since 2005 his research has focused on biologically inspired cognitive architectures (BICA), and since 2012 on social-emotional BICA (eBICA). Dr. Samsonovich is the founding chair of the BICA Conference Series (2010-2025), founding president of the BICA Society, founding Editor-in-Chief of the journal BICA and of the BICA*AI section of Cognitive Systems Research, and the recipient of many grant awards. He has published over 150 WoS/Scopus-indexed research papers and has a Scopus h-index of 25. His publications have earned journal cover illustrations in Learning & Memory, Journal of Neuroscience, Hippocampus, Cortex, and Complexity.

Speech: How to Teach BICA to Speak Natural Language and Understand Human Mentality

In the age of LLMs, can we still use symbolic AI, such as BICA? Indeed, LLMs cannot do everything for us. BICAs are unique in their potential ability to replicate higher human cognitive functions – volition, personality, social-emotional intelligence, goal reasoning – but they require intelligent interfaces and symbol grounding to interact with a real-life social environment. LLMs can do just that: bridge internal BICA representations and real-world modalities. This sort of hybridization involves two steps: (1) define an internal representation system for the BICA, and (2) train and instruct an LLM to translate between these representations and communicative acts. To be compatible with the human mentality, a BICA should compute the dynamics of intentionalities, formalized in terms of moral schemas. These are special constructs defined within the eBICA framework. Their elements – intentions and intensions (note the difference!) – can be recognized and expressed in speech using LLMs. Constructing a mathematical model of the semantic space of intensions is one challenge. One then also needs to connect intensions to moral schemas (using linear algebra) and to natural language (using LLMs). The result will be a new technology enabling the development of next-generation AI applicable to a broad spectrum of domains: from intelligent tutoring to psychological consultants, from entertainment to serious games.
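As a toy illustration of the linear-algebra step mentioned above (connecting intensions to moral schemas), one can represent intensions as vectors in a semantic space and score how strongly an observed intension projects onto the subspace spanned by each schema. This is only a sketch of the general idea; the axis names, schema names, and vectors below are hypothetical illustrations, not the actual eBICA formalism:

```python
import numpy as np

# Hypothetical 4-D semantic space of intensions; the axes are illustrative only.
AXES = ["dominance", "affiliation", "help", "harm"]

# Toy "moral schemas": each spans a subspace of intension space.
# Schema names and basis vectors are invented for this example.
SCHEMAS = {
    "cooperation": np.array([[0.0, 1.0, 1.0, 0.0]]),
    "rivalry":     np.array([[1.0, 0.0, 0.0, 1.0]]),
}

def schema_fit(intension, basis):
    """Fraction of an intension's squared norm captured by a schema's subspace."""
    # Orthonormalize the schema basis, then project (plain linear algebra).
    q, _ = np.linalg.qr(basis.T)
    proj = q @ (q.T @ intension)
    return float(proj @ proj) / float(intension @ intension)

def classify(intension):
    """Pick the moral schema that best accounts for the observed intension."""
    return max(SCHEMAS, key=lambda s: schema_fit(intension, SCHEMAS[s]))

act = np.array([0.1, 0.9, 0.8, 0.0])  # e.g., an offer of help
print(classify(act))  # → cooperation
```

In a full system, the vector `act` would itself be produced by an LLM translating a communicative act into this internal representation, which is step (2) of the hybridization outlined above.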

Ricardo Gudwin

Prof. Ricardo Gudwin is currently an Associate Professor at the Faculty of Electrical and Computer Engineering, State University of Campinas, Brazil. He received the B.S. degree in Electrical Engineering in 1989, the M.S. degree in 1992, and the Ph.D. in 1996, all from the Faculty of Electrical and Computer Engineering, State University of Campinas. His earlier research interests included fuzzy systems, neural networks, and evolutionary systems. His current research interests include the study of intelligence and intelligent systems, intelligent agents, semiotics, computational semiotics, and artificial cognition. Prof. Gudwin is the head of the "Computational Semiotics Group" and Scientific Member/Director of the Group for Research on Artificial Cognition within DCA/FEEC/UNICAMP, a member of the board of governors of the SEE - Semiotics-Evolution-Energy Virtual Institute in Toronto, Canada, and a member of the editorial board of the "On Line Journal for Semiotics, Evolution, Energy Development" (ISSN 1492-3157), published by the SEE Virtual Institute. He was the editor-in-chief of the journal "Controle & Automação", published by the SBA - Brazilian Society for Automation, from Sept. 2004 to Dec. 2008. He is currently a co-PI in the CEPID BRAINN (FAPESP Proc. 2013/07559-3), responsible for research in the field of cognitive architectures.

Speech: On the Pursuit of Understanding and Consciousness in Cognitive Architectures

Human beings do not merely carry on conversations with other beings, as an LLM does; through their sensory organs they are able to make sense of their surrounding environment, having feelings and experiences while interacting with it. They are able to truly understand the meaning of exchanged communications and to develop high-level thinking before acting in the world. In this talk, we investigate the role of Understanding: what it is, according to different thinkers, and how this capability might be made available in a cognitive architecture. We discuss whether LLMs (and Transformers, in a general sense) can exhibit true understanding, or whether they merely simulate this capability, acting like “probabilistic parrots” without truly understanding what they say. Finally, we propose an ontology of reality to support the development of Cognitive Architectures, which we believe might enable future artificial agents to attain a human-comparable understanding of reality, and possibly consciousness.

Ron Sun

Ron Sun is a cognitive scientist who has made significant contributions to computational psychology and other areas of cognitive science and artificial intelligence. He is currently Professor of Cognitive Science at Rensselaer Polytechnic Institute and was formerly James C. Dowell Professor of Engineering and Professor of Computer Science at the University of Missouri. He received his Ph.D. in 1992 from Brandeis University.

Speech: Rethinking Rationality and Intelligence in AI Through a Cognitive Architecture

This talk examines the literature on rationality and intelligence in AI systems, and delves into a specific approach: the development of a neural-symbolic cognitive architecture. The discussion covers various forms of rationality, different ideas about intelligence, the nature of human activities, the roles of motivation, and so on, all examined through the lens of the cognitive architecture. The talk argues that recent computational models are more sophisticated than often assumed: they are well equipped to overcome many of the criticisms leveled against AI.

Sean Kugele

Dr. Kugele's research focuses on artificial intelligence, cognitive modeling, and neuro-symbolic systems. The goal of his research is to understand how natural minds (such as human minds) work and to implement biologically inspired software systems based on the same principles. Dr. Kugele has worked for over a decade as a software engineer and software architect. He has undergraduate degrees in computer science, mathematics, and anthropology, and a PhD in computer science from the University of Memphis.

Speech: BICAs for Science: Mental Imagery, Consciousness and the Nature of Thought

Software agents based on biologically inspired cognitive architectures (BICAs) can serve as powerful tools for cognitive science. They offer controllable, inspectable platforms for probing complex psychological questions and testing alternative theoretical hypotheses. But while this synthetic approach to understanding minds holds tremendous promise, it also introduces unique methodological challenges. In this talk, I will explore these opportunities and challenges in the context of my recent attempts to use a BICA to investigate mental imagery and aphantasia.
 
Mental imagery (e.g., seeing with your “mind’s eye” or hearing your inner voice) is a capability most people take for granted. Yet 2–5% of the population report a complete absence of mental imagery in one or more sensory modalities. Surprisingly, individuals with aphantasia (aphantasics) often perform well on standard mental imagery tasks and can lead completely normal lives. This presents a modeling challenge: how can we reconcile the differences in aphantasics’ subjective experiences with their unaffected task performance? Fully addressing this question requires modeling both conscious and unconscious mental representations and processes. And the answer to this question has bearing on several longstanding debates, including the functional import of mental imagery, the nature of internal representations and processes, and the role of consciousness in producing overt behaviors.

Vassilis G. Kaburlasos

Vassilis G. Kaburlasos received the Diploma degree from the National Technical University of Athens, Greece, in 1986, and the M.Sc. and Ph.D. degrees from the University of Nevada, Reno, NV, USA, in 1989 and 1992, respectively, all in electrical engineering. He currently serves as a Tenured Full Professor in the Department of Informatics, Computer and Telecommunication Engineering (at Serres) of the International Hellenic University (IHU), Greece. During 2019-2024 he served as an elected member of IHU’s Research Committee. From 2016 to 2023 he was the founder and director of the HUman-MAchines INteraction (HUMAIN) research lab at the Department of Computer Science of IHU in Kavala, which attracted projects with a total budget of over 5M EUR. He has been a participant or (principal) investigator in 32 research projects, funded either publicly or privately, in the USA and in the European Union. He has been a member of the technical/advisory committee or an invited speaker at numerous international conferences, and a reviewer for more than 60 indexed (WoS) journals. He has (co)authored more than 230 scientific research articles in indexed journals, refereed conferences, edited volumes, and books. He is the co-owner of 2 patents in Greece and another 3 in Europe. His research interests include the modeling of cyber-physical systems, including intelligent robots, with breakthrough contributions to the “Lattice Computing (LC) information processing paradigm” toward computing with semantics. Dr. Kaburlasos is a member of several professional, scientific, and honor societies around the world, including Sigma Xi, Phi Kappa Phi, Tau Beta Pi, Eta Kappa Nu, and the Technical Chamber of Greece. Since 2019 his name has been included in the top 2% of “career-long” researchers worldwide in the field “Artificial Intelligence & Image Processing” according to Mendeley Data, http://doi.org/10.17632/btchxktzyw.2 /3 /4 /6 /7 .
Since February 2024, he is a member of the IEEE P3430 Working Group on “A Holistic Framework for AI Foundation Models”.

Speech: Human-Friendly Artificial Intelligence Enabled by the Lattice Computing Paradigm

A conventional deep-learning architecture implements a (statistical) model in the Hilbert space R^N, where R is the set of real numbers; this line of numerical modeling of the physical world goes back to Isaac Newton. However, when humans are involved, non-numerical data emerge, e.g., logical propositions, symbols, and disparate data hierarchies. Starting with “Industry 3.0”, there has been an increasing demand for models that handle non-numerical data per se; historically, the truth values of propositions were among the first such data studied, resulting in Boolean algebra/logic. In turn, the study of Boolean algebra led to the introduction of mathematical Lattice Theory (LT), or Order Theory. Lately, the Lattice Computing (LC) paradigm has been introduced as a modeling paradigm shift to a lattice data domain, including R^N, where partial order represents semantics. A number of popular mathematical tools will be presented, with emphasis on logic. Of special interest are information granules, namely Intervals’ Numbers (INs), whose collection is a mathematical lattice denoted by F. An IN may represent a fuzzy number, a probability distribution, or a real number. It turns out that F is a convex cone in the Hilbert space G of Generalized Intervals’ Numbers (GINs). In conclusion, an enhancement of deep-learning transformers from R^N to F^N is proposed. A theoretical advantage of the proposed enhancement is the potential to increase the cardinality of implementable models infinitely. A potential practical advantage is a reduction in energy consumption by architectures that implement fewer, more flexible models, i.e., with a tunable number of tunable parameters.
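A minimal sketch of the order-theoretic machinery behind LC, assuming closed real intervals ordered by set inclusion: intervals form a lattice under intersection (meet) and convex hull (join), and an Intervals' Number can be viewed as a stack of such intervals across membership levels. The class and method names below are illustrative, not taken from any LC library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed real interval [lo, hi]; a building block of an Intervals' Number."""
    lo: float
    hi: float

    def leq(self, other):
        # Partial order: [lo, hi] <= [lo', hi'] iff it is included in the other.
        return other.lo <= self.lo and self.hi <= other.hi

    def join(self, other):
        # Least upper bound: the convex hull of the two intervals.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def meet(self, other):
        # Greatest lower bound: the intersection (None if empty).
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        return Interval(lo, hi) if lo <= hi else None

a, b = Interval(0, 2), Interval(1, 3)
print(a.join(b))         # Interval(lo=0, hi=3)
print(a.meet(b))         # Interval(lo=1, hi=2)
print(a.leq(a.join(b)))  # True: every element lies below the join
```

The partial order here is exactly what carries the semantics in LC: a smaller interval is a more specific (less uncertain) statement than any interval that includes it.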