VERSES Publishes Breakthrough Research on Explainable Artificial Intelligence (XAI)

VANCOUVER, British Columbia, June 08, 2023 (GLOBE NEWSWIRE) -- VERSES AI Inc. (NEO:VERS) (OTCQX:VRSSF) ("VERSES" or the "Company"), a cognitive computing company specializing in the next generation of artificial intelligence, announces the publication of a landmark research paper, "Designing Explainable Artificial Intelligence with Active Inference: A framework for interpretability based on the study of introspection and decision-making." The paper articulates methods, grounded in active inference and the Free Energy Principle, for developing human-interpretable, explainable artificial intelligence (XAI) systems, opening new possibilities for transparency and understanding of AI processing.

VERSES' research arrives at a pivotal moment for the AI industry, coinciding with recent calls for greater AI explainability by the G7 Digital Ministers and with the EU's proposed AI Act, which targets large language models (LLMs), such as those deployed by prominent organizations including OpenAI, Google, Microsoft, and Meta, for their lack of explainability. The study could have far-reaching implications for how future AI systems are designed and implemented so that they can be more easily understood and regulated.

"Our research demonstrates the exciting potential of active inference for designing AI systems that are both capable of making complex decisions and explaining their reasoning in a way that humans can understand. This represents a significant step forward in building trust and accountability in AI,” said Mahault Albarracin, Director of Product at VERSES and lead author of the research paper.

The research proposes a unique AI architecture based on the active inference framework and the Free Energy Principle. These scientific principles can be used to create an AI that can explain its decision-making process in human-understandable terms, a significant advancement in the era of 'Explainable AI.'
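As a rough illustration of the idea (not the paper's or VERSES' actual implementation), the toy sketch below shows how an active-inference-style agent scores candidate actions by expected free energy and can report the interpretable components of that score, such as risk relative to preferred outcomes and ambiguity of expected observations, alongside its choice. All model matrices, action names, and numbers here are hypothetical.

```python
# Minimal, hypothetical sketch of active-inference-style decision-making
# with a human-readable breakdown of the decision criterion.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy generative model (all values illustrative):
A = np.array([[0.9, 0.2],            # p(outcome | hidden state), likelihood mapping
              [0.1, 0.8]])
B = {                                 # p(next state | current state), one matrix per action
    "stay":   np.array([[1.0, 0.0], [0.0, 1.0]]),
    "switch": np.array([[0.0, 1.0], [1.0, 0.0]]),
}
C = softmax(np.array([2.0, 0.0]))     # prior preference over outcomes (the "goal")
q_s = np.array([0.7, 0.3])            # current posterior beliefs over hidden states

def expected_free_energy(action):
    """Score an action and return the interpretable terms behind the score."""
    q_next = B[action] @ q_s                               # predicted next-state beliefs
    q_o = A @ q_next                                       # predicted outcome distribution
    risk = float(np.sum(q_o * np.log(q_o / C)))            # KL(predicted || preferred outcomes)
    ambiguity = float(-np.sum(A * np.log(A), axis=0) @ q_next)  # expected outcome uncertainty
    return risk + ambiguity, risk, ambiguity

scores = {a: expected_free_energy(a) for a in B}
chosen = min(scores, key=lambda a: scores[a][0])
for a, (total, risk, ambiguity) in scores.items():
    print(f"action={a:6s} EFE={total:.3f} (risk={risk:.3f}, ambiguity={ambiguity:.3f})")
print(f"chosen: {chosen}, because it minimizes expected free energy")
```

Because the action score decomposes into named, human-readable terms, the same quantities that drive the choice can be reported as its explanation, which is one way to read the explainability claim.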

A collaboration between researchers from VERSES, the Wellcome Centre for Human Neuroimaging at University College London, the Departments of Cognitive Computing and Philosophy at the Université du Québec à Montréal, and the Berlin School of Mind & Brain at Humboldt-Universität zu Berlin, the paper provides a compelling overview of active inference for modeling decision-making with human-like introspection.

The authors contend that their proposed architecture will enable AI systems to track and explain the factors contributing to their decisions, that the approach can be further scaled up using open standards for knowledge modeling such as those being developed by the IEEE Spatial Web Working Group, and that it is intended to be demonstrated in VERSES' KOSM OS and GIA products scheduled for release later this year. This new approach to AI transparency aligns with growing demands from regulators, policymakers, and public interest groups for AI systems that are more interpretable and auditable by users.
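For the scaling claim, one can imagine the per-decision breakdown above being serialized as a structured, machine-readable record for audit. The snippet below is purely hypothetical and does not follow any IEEE Spatial Web Working Group schema or VERSES format; it only illustrates the kind of knowledge-model-friendly record that open standards might formalize. All field names and numbers are placeholders.

```python
# Hypothetical audit record for one decision; field names and values are illustrative only.
import json
from datetime import datetime, timezone

decision_record = {
    "agent": "example-agent-001",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "chosen_action": "stay",
    "candidates": {  # placeholder scores, e.g. taken from the sketch above
        "stay":   {"expected_free_energy": 0.41, "risk": 0.30, "ambiguity": 0.11},
        "switch": {"expected_free_energy": 0.78, "risk": 0.62, "ambiguity": 0.16},
    },
    "rationale": "chose the action with the lowest expected free energy",
}
print(json.dumps(decision_record, indent=2))
```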

"The ability for AI to explain its decision-making process is crucial for building trust and understanding amongst end users," said Maxwell Ramstead, VERSES Director of Research. "Our proposed framework takes an important step in this direction, potentially revolutionizing how we view and interact with AI."

Amid growing global concern about the risks and safety of artificial intelligence, the breakthrough research paper is set to be showcased at the upcoming Active Inference conference in Belgium next month.

About VERSES

VERSES is a cognitive computing company specializing in next-generation Artificial Intelligence. Modeled after natural systems and the design principles of the human brain and human experience, VERSES' flagship offering, GIA™, is an Intelligent Agent for anyone, powered by KOSM™, a network operating system enabling distributed intelligence. Built on open standards, KOSM transforms disparate data into knowledge models that foster trustworthy collaboration between humans, machines, and AI across digital and physical domains. Imagine a smarter world that elevates human potential through innovations inspired by nature. Learn more at VERSES, LinkedIn, and Twitter.

On Behalf of the Company

Gabriel René
VERSES Technologies Inc.
Co-Founder & CEO
press@verses.ai

Media and Investor Relations Inquiries

Leo Karabelas
Focus Communications
President
info@fcir.ca

The NEO has not reviewed or approved this press release for the adequacy or accuracy of its contents.

Forward-Looking Statements Cautionary Note

This release includes certain statements and information that may constitute forward-looking information within the meaning of applicable Canadian securities laws. Forward-looking statements relate to future events or future performance and reflect the expectations or beliefs of management of the Company regarding future events. Generally, forward-looking statements and information can be identified by the use of forward-looking terminology such as "intends" or "anticipates", or variations of such words and phrases or statements that certain actions, events or results "may", "could", "should", "would" or "occur". This information and these statements, referred to herein as "forward-looking statements", are not historical facts, are made as of the date of this news release and include without limitation, statements relating to the impact of active inference and related research on future AI development and explainable AI models, the impact of active inference research on further advancements in AI, and the release date of the whitepaper. In making the forward-looking statements in this news release, the Company has applied several material assumptions, including without limitation, that active inference will play a significant role in the development of explainable AI models and other milestones in AI and that the Company will be able to finalize and release the whitepaper on its expected timeline.

These forward-looking statements involve numerous risks and uncertainties, and actual results might differ materially from results suggested in any forward-looking statements. These risks and uncertainties include, among other things, that active inference will not be widely applied in developing explainable AI models, that active inference will not have application to other AI milestones and that the Company will not be able to release the whitepaper on its expected timeline. Although management of the Company has attempted to identify important factors that could cause actual results to differ materially from those contained in forward-looking statements or forward-looking information, there may be other factors that cause results not to be as anticipated, estimated or intended. There can be no assurance that such statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements and forward-looking information. Readers are cautioned that reliance on such information may not be appropriate for other purposes. The Company does not undertake to update any forward-looking statement, forward-looking information or financial outlook that are incorporated by reference herein, except in accordance with applicable securities laws.

