
TopLevelOntologyNLPResearch

The content below was initially produced for an email to some people (luminaries in the field) about this specific area of research and its problem / solution definitions.

Work is continuing on how best to define a database structure that is technically appropriate (i.e. it will work) whilst also addressing the broader sorts of problems that the webizen work seeks to address more generally.

The NOTES:

The ecosystem is intended to produce 'personal AI' or 'human-centric AI', which in turn supports the relationship between human beings and what they do. There are 'values credentials', which is essentially about defining existing instruments (i.e. Human Rights instruments) in RDF, so that the informatics can support validation and consideration of whether and/or how relationships are managed by people in relation to the concepts outlined in those instruments. To implement this, I need to define a top-level ontology, and it is important to me that the ecosystem does not classify the informatics environments as subclasses of owl:Thing.
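As a rough illustration only, a values credential might reduce to a handful of RDF-style statements linking an agent's relationship to a clause of an existing instrument. The vocabulary and identifiers in the sketch below are hypothetical placeholders, not anything defined by this work:

```go
package main

import "fmt"

// Statement is a minimal subject-predicate-object triple, used here only to
// illustrate what a 'values credential' might look like once an instrument
// (e.g. a Human Rights article) has been described in RDF.
type Statement struct {
	Subject, Predicate, Object string
}

func main() {
	// Hypothetical identifiers: neither the vocabulary nor the IRIs below are
	// defined by the webizen / permissive-commons work; they are placeholders.
	credential := []Statement{
		{"ex:credential42", "rdf:type", "values:Credential"},
		{"ex:credential42", "values:assertedBy", "ex:someAgent"},
		{"ex:credential42", "values:appliesTo", "ex:someRelationship"},
		{"ex:credential42", "values:refersToClause", "ex:udhrArticle12"},
	}
	for _, s := range credential {
		fmt.Printf("%s %s %s .\n", s.Subject, s.Predicate, s.Object)
	}
}
```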

The ecosystem incorporates decentralised ontologies:

As such, I have an opportunity to redefine it. The HumanCentricAI 'agent' (a webizen, owned by whoever purchases it - which at the moment essentially means the underlying database) is designed to operate privately on a laptop/desktop in the first instance; apps are defined in a way that advances the old RWW (read-write web) works, with a variety of distinctions from Solid.

(My belief is that the W3C Cognitive AI works will end up playing an important role, though I am not yet sure how.)

The initial implementation is intended to be done in Go, and will likely use BadWolf (which has temporal support).
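The sketch below is not the BadWolf API; it is a minimal, hypothetical illustration of the idea BadWolf supports natively: predicates anchored to a point in time, so that statements can be qualified by when they held.

```go
package main

import (
	"fmt"
	"time"
)

// TemporalTriple is a hypothetical stand-in for the kind of statement a
// temporal graph store such as BadWolf can hold: a subject/predicate/object
// triple whose predicate is anchored to a point in time.
type TemporalTriple struct {
	Subject   string
	Predicate string
	Object    string
	At        time.Time // when the statement held / was asserted
}

func main() {
	t := TemporalTriple{
		Subject:   "/place<holborn>",
		Predicate: "recorded_in",
		Object:    "/document<domesday_book>",
		At:        time.Date(1086, time.January, 1, 0, 0, 0, 0, time.UTC),
	}
	fmt.Printf("%s %s@%s %s\n",
		t.Subject, t.Predicate, t.At.Format("2006-01-02"), t.Object)
}
```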

As I go through defining 'permissive commons' (decentralised ontologies), there is an opportunity to redefine the structures of how they are made to work. Therein, there is also an opportunity to redefine how the top-level ontology is defined, and I am puzzling over how this might best be done.

Some notes: RootConcepts. Therein, the general idea is that perhaps the best approach is to develop (train) a natural language model that can in turn be used to define the top-level ontology / DB structures.
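Purely as a sketch of the kind of output such a model might target (the concept names below are placeholders, not a proposed top-level ontology):

```go
package main

import "fmt"

// RootConcept is a hypothetical target structure that a trained language
// model could emit: a named root concept, an optional parent (empty for a
// true root, deliberately not owl:Thing), and the natural-language evidence
// the concept was derived from.
type RootConcept struct {
	Name     string
	Parent   string
	Evidence []string
}

func main() {
	roots := []RootConcept{
		{Name: "Agent", Evidence: []string{"a person or system that acts"}},
		{Name: "Event", Evidence: []string{"something that happens at a place and time"}},
		{Name: "Place", Evidence: []string{"somewhere an event can occur"}},
	}
	for _, r := range roots {
		fmt.Printf("%-6s parent=%q evidence=%v\n", r.Name, r.Parent, r.Evidence)
	}
}
```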

Yet, I have not historically spent enough time getting into the implementation-related research areas of ML / NN models (i.e. what can be done with relatively little computational power on a consumer device, whilst potentially extending in future to employ neuromorphic processing hardware, etc.). Therein also: is this a job for Prolog? I found https://github.com/ichiban/prolog, which is also written in Go, but at the moment I don't know...
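For what it's worth, ichiban/prolog can be embedded directly in a Go program. The sketch below (a toy subsumption rule, not anything from the webizen ontology) follows the pattern shown in that project's README; treat the exact API calls as an assumption to verify against the current version.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ichiban/prolog"
)

func main() {
	// New(nil, nil) creates an interpreter without custom I/O streams.
	p := prolog.New(nil, nil)

	// A toy concept hierarchy expressed as Prolog facts and a subsumption rule.
	if err := p.Exec(`
		subclass_of(place, root_concept).
		subclass_of(settlement, place).
		subsumed_by(X, Y) :- subclass_of(X, Y).
		subsumed_by(X, Y) :- subclass_of(X, Z), subsumed_by(Z, Y).
	`); err != nil {
		log.Fatal(err)
	}

	// Ask which concepts subsume 'settlement'.
	sols, err := p.Query(`subsumed_by(settlement, Concept).`)
	if err != nil {
		log.Fatal(err)
	}
	defer sols.Close()

	for sols.Next() {
		var s struct{ Concept string }
		if err := sols.Scan(&s); err != nil {
			log.Fatal(err)
		}
		fmt.Println("subsumed by:", s.Concept)
	}
}
```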

The other part of the equation, whilst focused solely on English for the POC works, is to what depth any DB structure based upon the English language should go, in order to limit unwanted AI 'interference' that would poorly impact 'freedom of thought', etc.

[[ChatGPTDynamicOntology]]

Therein, I ended up doing an experiment with ChatGPT in relation to my surname:

AChatGPTExperimentHolborn

(Noting that I didn't get into the Templar material, or a variety of other considerations related to the associated puzzle, which sits within a much shorter timeframe than is commonly associated with, for example, indigenous history. The initial research came about as a consequence of my research into the history of the banking system, as was required for the 'knowledge banking' ecosystem designs; those have since been shown to have failed, due to the priorities of those the modelling intended to depend upon to support rule of law, human rights, access to justice, etc. Noting also that the idea of paying 'knowledge workers' a UBI is flawed, as that method does not end up producing tax revenue when the useful benefit of the work is piped to foreign jurisdictions, even if that is the practice supported by a citizen's (local jurisdictional) government.)

In any case, the main insight I felt I gained from that experiment (noting the significance of that 'language model') was about the importance of the predicates of the English language in supporting better 'sense making'; those predicates, in turn, have both geospatial and temporal attributes that end up being informed by historical ("SpaceTime") events, etc.
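A rough, hypothetical sketch of that idea follows; none of the type or field names come from the project, and the coordinates are only approximate placeholders for Holborn.

```go
package main

import (
	"fmt"
	"time"
)

// QualifiedPredicate is a hypothetical illustration of a predicate that
// carries both geospatial and temporal context, so the same relation can be
// interpreted differently depending on where and when it applied.
type QualifiedPredicate struct {
	Relation string
	Lat, Lon float64   // rough geospatial anchor
	From, To time.Time // interval during which the relation held
}

func main() {
	p := QualifiedPredicate{
		Relation: "located_outside_walls_of",
		Lat:      51.5174, Lon: -0.1200, // approximate placeholder for Holborn
		From: time.Date(1086, 1, 1, 0, 0, 0, 0, time.UTC),
		To:   time.Date(1200, 1, 1, 0, 0, 0, 0, time.UTC),
	}
	fmt.Printf("%s @(%.4f,%.4f) [%d-%d]\n",
		p.Relation, p.Lat, p.Lon, p.From.Year(), p.To.Year())
}
```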

In that example, research about the history of the name and related historical events seemed to indicate that 'Holborn' (outside the City of London; an area that had a role with the 'Old Temple' and, in turn, also the Inns of Court when they were sought to be placed outside the walls of the City of London; noting also that various types of people weren't automatically allowed inside the walls of Londinium...) had a more complex history than is told in relation to the Domesday Book.

I guess this was intended to be a relatively simple example of the broader problem of 'knowledge systems modelling', and of the underlying 'opportunity' relating to how the foundational 'top-level ontology' might now be modelled in a way that addresses issues that were either too hard decades ago, or set aside due to the nature of the situation at the time.
