
Tooling


The project began with the KnightLab Timeline tool, which requires the user to gather the information and populate the timeline by hand. This is discussed in TheTimelinesProject.
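
To make that manual process concrete, below is a minimal sketch of the sort of JSON a TimelineJS timeline consumes. The field names follow the published TimelineJS JSON format; the event content and output file name are purely illustrative.

```python
import json

# Illustrative timeline in the TimelineJS JSON shape:
# a title block plus a list of events, each with a start_date and text.
timeline = {
    "title": {
        "text": {"headline": "Example timeline", "text": "Populated by hand for now."}
    },
    "events": [
        {
            "start_date": {"year": "1992", "month": "6", "day": "3"},
            "text": {
                "headline": "Example event",
                "text": "A short description with a <a href='https://example.org'>source link</a>.",
            },
        }
    ],
}

with open("timeline.json", "w") as f:
    json.dump(timeline, f, indent=2)
```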

One of the considerations is how to form a decentralised methodology for storing information; this is directly tied to the requirements for PermissiveCommonsTech generally. Since Git is, in effect, decentralised, the present assumption is that a Git / HTTP based method is worth investigating.

A tool that is being explored is: https://github.com/AKSW/QuitStore
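
As a rough sketch of how timeline data held in a QuitStore instance might be read over HTTP: the example below uses the standard SPARQL 1.1 protocol. The endpoint URL, graph contents and query are assumptions for illustration, not a tested integration.

```python
import requests

# Assumed local QuitStore SPARQL endpoint; adjust host/port to your instance.
ENDPOINT = "http://localhost:5000/sparql"

# Standard SPARQL 1.1 protocol: POST the query, ask for JSON results.
query = """
SELECT ?event ?label WHERE {
  ?event a <https://schema.org/Event> ;
         <http://www.w3.org/2000/01/rdf-schema#label> ?label .
} LIMIT 10
"""

response = requests.post(
    ENDPOINT,
    data=query,
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["event"]["value"], row["label"]["value"])
```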

As a consequence of the large-scale employment of DIDs, a solution that makes use of them increasingly appears to be a requirement; although the hope is that significant changes will occur in future to the way these 'identifiers' are designed to function and, in turn, be employed.

As exemplified by GitHub and other Git-related platforms, information can be served over HTTP for read purposes, but writes and permission-related operations require authentication. Git supports both PGP and SSH, which is therefore being explored:

https://github.com/WebCivics/did-method-git
https://github.com/WebCivics/did-method-ssh

There may be some benefit in seeking to form an integration between the QuitStore work and a Git-related method, whether the SSH-related method is coupled (integrated) or loosely coupled (kept as a separate method).

How to then form a discovery method is still unclear.

It may be that a method could use bit-torrent and/or magnet links.
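
If that path were taken, a discovery record could be as simple as a magnet URI. A hedged sketch of assembling one is below; the info-hash is a placeholder, and this is only one of the options mentioned here.

```python
from urllib.parse import quote

def magnet_link(info_hash: str, name: str, trackers=()) -> str:
    """Build a BitTorrent magnet URI from an info-hash and a display name."""
    link = f"magnet:?xt=urn:btih:{info_hash}&dn={quote(name)}"
    for tracker in trackers:
        link += f"&tr={quote(tracker, safe='')}"
    return link

# Placeholder info-hash; a real one would be the SHA-1 of the torrent's info dictionary.
print(magnet_link("0" * 40, "timeline-package-example"))
```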

Another potentially interesting method is gun.eco; however, this hasn't yet been explored.

Whilst the initial process is focused upon public information, the objective is to end up with a solution that can be effectively advanced to support presentations involving private and sensitive information.

UI / UX

The User-Interface (UI) / User-Experience (UX) methodology is presently unclear. The considerations will be described in more detail in the SpaceTimeAppRequirements note, which will in turn provide a better definition of the requirements and challenges.

Ingest Tools

The process of manually finding references and then adding them is time-consuming, and not necessarily supportive of underlying probity / verifiability requirements; namely, whether the linked resources remain available as an exact copy of the original document.

This raises two considerations. The first is that there should be a methodology for making a back-up, which in turn also relates to the broader PermissiveCommonsRequirements. The second is that there should be some sort of browser plugin or app that accelerates the means to add records and related informatics.
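
A small sketch of what the probity side of an ingest tool might do: fetch a referenced resource, keep a local copy, and record a content hash so that later drift from the original can be detected. The file layout and names are assumptions for illustration.

```python
import hashlib
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

def archive_reference(url: str, out_dir: str = "archive") -> dict:
    """Fetch a referenced resource, store a copy, and record a SHA-256 digest."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()

    digest = hashlib.sha256(body).hexdigest()
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / digest).write_bytes(body)  # the copy is named by its own hash

    record = {
        "url": url,
        "sha256": digest,
        "retrieved": datetime.now(timezone.utc).isoformat(),
    }
    (out / f"{digest}.json").write_text(json.dumps(record, indent=2))
    return record

# Re-fetching the URL later and comparing digests shows whether the
# linked resource still matches the copy taken at ingest time.
```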

There are potentially two candidate solutions to be considered.

The first is some sort of browser plugin; however, this may not provide the best path towards delivering a fit-for-purpose outcome. The alternative is to advance the browser project, likely building on the beaker-browser foundations; but this is yet to be examined.

Semantic parsing

Each document has an array of key topics and concepts that should be able to be semantically tagged, so that records can be collated in a way that benefits from semantic enrichment of the documents and, therefore, the ability to represent complexities.

Historically, there were examples of how this could be achieved online; however, I cannot find them quickly anymore, and some of the links no longer exist.

The objective process may require a local methodology built using semantic web ontologies that exist presently, rather than the 'sense'-related methods that this project is, in turn, seeking to be a stepping stone towards bringing about.

There are a few different types of challenges relating to different types of media, but the focus will start with English text documents and thereafter related media. The methodology should be considerate of the future need to ensure multi-lingual support.
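
A minimal sketch of local semantic tagging using ontologies that already exist (Dublin Core terms and SKOS, via rdflib). The document URI, topics and labels are illustrative; the language-tagged literals hint at the multi-lingual requirement.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, SKOS

g = Graph()
DOC = URIRef("https://example.org/docs/example-report")   # illustrative document URI
TOPICS = Namespace("https://example.org/topics/")         # illustrative concept scheme

# Tag the document with subjects, following the common pattern of
# dcterms:subject pointing at skos:Concepts.
for topic, label in [("land-rights", "Land rights"), ("water", "Water management")]:
    concept = TOPICS[topic]
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((DOC, DCTERMS.subject, concept))

g.add((DOC, DCTERMS.title, Literal("Example report", lang="en")))

print(g.serialize(format="turtle"))
```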

HyperMedia packages

The results end up with a requirement to produce a version of the 'hypermedia packages'. This would include the means to store the original artifacts, alongside files for discovery and semantic notations, and representation of the content.
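
A hedged sketch of what assembling one such package might look like on disk: the original artifact kept untouched, a manifest for discovery, and a Turtle file for semantic notations. The layout is an assumption for illustration only.

```python
import json
import shutil
from pathlib import Path

def build_package(artifact: str, package_dir: str, title: str) -> None:
    """Assemble an illustrative 'hypermedia package' folder: the original
    artifact plus files for discovery and semantic notation."""
    pkg = Path(package_dir)
    pkg.mkdir(parents=True, exist_ok=True)

    # 1. Keep the original artifact untouched.
    shutil.copy(artifact, pkg / Path(artifact).name)

    # 2. A simple manifest for discovery.
    manifest = {"title": title, "artifacts": [Path(artifact).name]}
    (pkg / "manifest.json").write_text(json.dumps(manifest, indent=2))

    # 3. A placeholder for semantic notations (e.g. Turtle produced by
    #    the tagging step above).
    (pkg / "annotations.ttl").write_text("# semantic notations go here\n")

# Example usage (assumes 'report.pdf' exists alongside the script):
# build_package("report.pdf", "packages/example", "Example report")
```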

Issues & confusion

The first point relating to 'issues', and indeed to my 'confusion', is that this project relates to the HumanCentricAI works, the Webizen works, and the WebCivics works more broadly, as well as, consequentially, the peace infrastructure project and the trust factory works. It is a fairly instrumental piece of the puzzle to solve. Yet the means to address the 'sense' issue is absolutely enhanced through the development of these works, as the applied experiment / tests upon DIDSSICovidSonglines are far simpler than the languages of our human family over time, space, place, events, etc., which need to be provided a fabric of support for use with AI agents; of these, I am particularly focused upon the Webizen works.

Whilst remaining supportive, on a best-efforts basis, of others seeking implementations more broadly.

Therein, the particular topic is intended to support better comprehension of what exactly it is that they've been making, and whom it is therefore assumed to be 'fit for purpose' to be applied upon; notwithstanding the understanding that it raises sensitive topics, for which insights are considered helpful for the very many, even if not desired by the very few.

I think the repo should be in Web Civics, and that the matter of the destination of the documentation be reserved, noting that the project is deemed important for the sense project.

The docs will be added to the TheTimelinesProject folder, and then associated with the practice-method insights related to [[DIDSSICovidSonglines]].
