NobodyAI
The concept of NobodyAI refers to AI and/or software systems that are designed in such a way that their use may cause harm to others, yet it is the fault of no human being - by design. The implication is that companies and software agents can be used to knowingly harm others, in a manner designed to ensure that no penalties are applied to any human being as a consequence of how those systems impact other human beings; this outcome follows directly from how #NobodyAI systems are intended to be operated by the human beings that designed them.
"All Care, no responsibility. #NoBodyAI"
Some Illustrative Examples:
E.g. Robodebt ( LINK1 LINK2 ); or, say, a fictional use-case: you lost your child because government made a series of mistakes. After the irreversible consequences have played out, and an opportunity for the situation to be reviewed finally arises many years down the track, the OFFICIAL answer becomes:
"Oh, that was the fault of nobody. Yes, we're all paid / compensated in our jobs working for government; but we've made this AI system, which means we're never responsible for anything that happens to anyone. It's our #NobodyAI platform - it will do whatever we tell it to do to you, and if that's bad, or if you make enough noise about it, the enquiry, the court, whoever is asking questions will be told clearly: it was nobody's fault, the decisions were all made by our #NobodyAI platform. If you have a problem with that, take it up with the Department of Responsibility... which is probably run out of the US via another AI platform."
The Webizen ecosystem is the opposite of a NobodyAI design, and of any related ecosystem of that kind.
Note:
- [[WebScience/PeaceInfrastructureProject/SafetyProtocols/HumanCentricAI]]
- [[HumanCentricAIEthics]]
- [[HumanCentricDigitalIdentity]]
- [[PCT-Webizen-Notes/Webizen/EconomicSystems/Centricity]]