Appendix C. Knowledge Graphs

This appendix describes what knowledge graphs are.

A very good explanation of what knowledge graphs are can be found on the OntoText GraphDB website.

The heart of the knowledge graph is a knowledge model – a collection of interlinked descriptions of concepts, entities, relationships and events where:

  • Descriptions have formal semantics that allow both people and computers to process them in an efficient and unambiguous manner;

  • Descriptions contribute to one another, forming a network, where each entity represents part of the description of the entities related to it;

  • Diverse data is connected and described by semantic metadata according to the knowledge model.

Knowledge graphs, represented in RDF, provide the best framework for data integration, unification, linking and reuse, because they combine:

  • Expressivity: The standards in the Semantic Web stack – RDF(S) and OWL – allow for a fluent representation of various types of data and content: data schema, taxonomies and vocabularies, all sorts of metadata, reference and master data. The RDF* extension makes it easy to model provenance and other structured metadata.

  • Performance: All the specifications have been thought out, and proven in practice, to allow for efficient management of graphs of billions of facts and properties.

  • Interoperability: There is a range of specifications for data serialization, access (SPARQL Protocol for end-points), management (SPARQL Graph Store) and federation. The use of globally unique identifiers facilitates data integration and publishing.

  • Standardization: All the above is standardized through the W3C community process, to make sure that the requirements of different actors are satisfied – all the way from logicians to enterprise data management professionals and system operations teams.

Knowledge graphs are a perfect data structure for representing Web 3 metadata, especially when viewed from the data mesh architecture perspective, mainly because of their distributed nature, provenance modelling, and interoperability and standardization via well-established Semantic Web protocols.

The picture above speaks a thousand words: coffee shop brands can be tokenized as NFTs and described via various metadata properties, including location, company relations, published articles and inferred facts, allowing suspicious business entities to be identified (an example of using metadata for regulation).
