Document detail
ID

oai:arXiv.org:2410.13803

Topic
Computer Science - Artificial Inte...
Author
Apriceno, Gianluca; Tamma, Valentina; Bailoni, Tania; de Berardinis, Jacopo; Dragoni, Mauro
Category

Computer Science

Year

2024

Listing date

10/23/2024

Keywords
knowledge graphs
Abstract

The ability to reason with and integrate different sensory inputs is the foundation underpinning human intelligence and it is the reason for the growing interest in modelling multi-modal information within Knowledge Graphs.

Multi-Modal Knowledge Graphs extend traditional Knowledge Graphs by associating an entity with its possible modal representations, including text, images, audio, and videos, all of which are used to convey the semantics of the entity.

Despite the increasing attention that Multi-Modal Knowledge Graphs have received, there is a lack of consensus on how modalities should be defined and modelled, with definitions often determined by the application domain.

In this paper, we propose a novel ontology design pattern that captures the separation of concerns between an entity (and the information it conveys), whose semantics can have different manifestations across different media, and its realisation in terms of a physical information entity.

By introducing this abstract model, we aim to facilitate the harmonisation and integration of different existing multi-modal ontologies which is crucial for many intelligent applications across different domains spanning from medicine to digital humanities.
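The separation of concerns the abstract describes can be sketched in plain Python. This is a minimal illustration, not the paper's ontology: the class and field names (`Entity`, `Manifestation`, `InformationRealization`) are assumptions chosen to mirror the distinction between an abstract entity, its modality-specific manifestations, and the physical information entity that realises each one.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class InformationRealization:
    """A concrete physical information entity (e.g., a specific file)."""
    location: str

@dataclass(frozen=True)
class Manifestation:
    """The entity's semantics expressed in one modality."""
    modality: str  # e.g., "text", "image", "audio", "video"
    realization: InformationRealization

@dataclass
class Entity:
    """The abstract entity, independent of any single medium."""
    label: str
    manifestations: list = field(default_factory=list)

    def modalities(self):
        """Modalities in which this entity is manifested."""
        return sorted({m.modality for m in self.manifestations})

# One entity manifested in two modalities, each backed by a
# distinct physical realisation.
work = Entity("Symphony No. 5")
work.manifestations.append(
    Manifestation("text", InformationRealization("score.pdf")))
work.manifestations.append(
    Manifestation("audio", InformationRealization("recording.mp3")))

print(work.modalities())  # -> ['audio', 'text']
```

Keeping the entity separate from its realisations means two ontologies that disagree on modality definitions can still align at the entity level, which is the harmonisation goal the paper states.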

Comment: 20 pages, 6 figures

Apriceno, Gianluca; Tamma, Valentina; Bailoni, Tania; de Berardinis, Jacopo; Dragoni, Mauro, 2024, A Pattern to Align Them All: Integrating Different Modalities to Define Multi-Modal Entities

