Knowledge plays a fundamental role in AI, providing the backbone and context for many intelligent applications. Given the scale and the complexity of such applications, there is a growing agreement that they need to be supported by knowledge that is curated, accurate and credible. This is particularly critical when considering the impressive performance of large language models for tasks including text classification, question answering and text generation.
Large Language Models, including Generative Pre-Trained Transformers such as GPT-3.5 and GPT-4, are specifically designed for natural language processing tasks and are trained on a vast amount of text data available on the Web.
Knowledge graphs provide an explicit and shared model of the knowledge used by intelligent applications, whether domain-specific (e.g. applications that use medical ontologies) or general-purpose, such as Wikidata or DBpedia and its ontology. The construction of these knowledge bases is often time-consuming, requires extensive human curation, and can highlight differences amongst the stakeholders. Pre-trained language models have been proposed to support different activities of the knowledge engineering process at large, including (but not exclusively):
- Drafting of ontology classes and properties;
- Knowledge base population and refinement;
- Generation and retrofitting of competency questions;
- Human-in-the-loop activities;
- Knowledge base alignment;
- Extraction of rich knowledge structures;
- Evaluation of knowledge graphs.
We invite original contributions that report experiences of using large language models in the context of knowledge engineering. Exploratory studies and reports of negative results are particularly welcome.
Contributions will be reviewed with respect to:
- Novelty and significance of the proposed approach
- Soundness of the methodology
- Potential impact of the proposed approach or results
- Clarity and quality of presentation
Accepted papers will be allocated a time slot for presentation, and time will be available for discussing the results obtained and the experience gained in using these approaches.
|Abstract submission deadline|February 21, 2024
|Paper submission deadline|February 29, 2024
|Notification to authors|April 11, 2024
|Camera-ready papers due|April 25, 2024
All deadlines are 23:59 Anywhere on Earth (UTC-12).
- ESWC will not accept work that is under review or has already been published or accepted for publication in a journal, in another conference, or in another ESWC track.
- Authors of papers must pre-submit an abstract by the abstract deadline.
- Papers accepted for publication will appear in the companion proceedings of the conference, part of the Springer’s Lecture Notes in Computer Science series. The preprints of the accepted papers will be available openly.
- Papers must not exceed 8 pages (plus unlimited references) and must be written in English.
- Submissions must be either in PDF or in HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions. For HTML submission guidance, see the HTML submission guide.
- At least one author per contribution must register for the conference and present the paper.
- Submission is done through EasyChair.
Oscar Corcho, Universidad Politécnica de Madrid, Spain
Paul Groth, University of Amsterdam, Netherlands
Elena Simperl, King’s College London, United Kingdom
Valentina Tamma, University of Liverpool, United Kingdom, V.Tamma@liverpool.ac.uk