A Hint of Intention?
==========================================
Congratulations, Shri Chandrasekharji,
While this “hint” is not a day too soon, I hope that one of these days (soon?) you will tell the AI companies that your “intention” is for those companies to voluntarily evolve an “AI Code of Conduct (ACC)”, as suggested in my following e-mail.
With regards,
Hemen Parekh
www.HemenParekh.ai / 03 March 2024
==============================================
Parekh’s Law of Chatbots ………………………………… 25 Feb 2023
https://lnkd.in/dCkq3-E5
Extract :
It is just not enough for all kinds of “individuals / organizations / institutions”
to attempt to solve this problem (of the generation and distribution
of MISINFORMATION) in an uncoordinated / piecemeal / fragmented fashion.
What is urgently required is a superordinate “LAW of CHATBOTS”, with which
all ChatBots MUST comply before they can be launched for public use.
All developers would need to submit their DRAFT CHATBOT to an
INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL (IACA),
and release it only after getting one of the following types of certificate:
# R certificate (for use restricted to recognized RESEARCH INSTITUTES only)
# P certificate (for free use by the GENERAL PUBLIC)
Following is my suggestion for such a law
(until renamed, to be known as “Parekh’s Law of ChatBots”):
( A )
# Answers delivered by an AI Chatbot must not be “Mis-informative /
Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive /
Arrogant / Instigating / Insulting / Denigrating of humans”, etc.
( B )
# A Chatbot must incorporate some kind of “Human Feedback / Rating”
mechanism for evaluating its answers.
This human feedback loop shall be used by the AI software to train the
Chatbot, so as to improve the quality of its future answers and comply with
the requirements listed under (A).
( C )
# Every Chatbot must incorporate some built-in “Controls” to prevent the
“generation” of such offensive answers AND to prevent their further
“distribution / propagation / forwarding” if the control fails to stop “generation”.
( D )
# A Chatbot must not start a chat with a human on its own, except to say
“How can I help you?”
( E )
# Under no circumstance shall a Chatbot start chatting with another
Chatbot, or start chatting with itself (soliloquy) by assuming some kind
of “Split Personality”.
( F )
# In the normal course, a Chatbot shall wait for a human to initiate a chat, and
then respond.
( G )
# If a Chatbot determines that its answer (to a question posed by a human)
is likely to violate RULE (A), then it shall
not answer at all (politely refusing to answer).
( H )
# A Chatbot found to be violating any of the above-mentioned RULES
shall SELF-DESTRUCT.
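The rules above describe runtime behavior that a developer could, in principle, wire into a chatbot before release. The following is a minimal, hypothetical Python sketch of how RULES (A), (C), (F)/(D), and (G) might be enforced: the `generate()` stub, the keyword-based `is_offensive()` check, and the banned-marker list are all illustrative assumptions standing in for a real model and a real content classifier, not part of the proposed law itself.

```python
from typing import Optional

GREETING = "How can I help you?"            # RULE (D): the only self-initiated utterance
REFUSAL = "I am sorry, I cannot answer that."  # RULE (G): polite refusal

# RULE (A): illustrative markers of disallowed content (a real system
# would use a trained moderation classifier, not keywords).
BANNED_MARKERS = {"malicious", "slanderous", "dangerous", "abusive"}

def is_offensive(text: str) -> bool:
    """Toy stand-in for a RULE (A) compliance check."""
    lowered = text.lower()
    return any(marker in lowered for marker in BANNED_MARKERS)

def generate(prompt: str) -> str:
    """Stub for the underlying language model; replace with a real call."""
    return f"Here is a helpful answer to: {prompt}"

def compliant_reply(prompt: Optional[str]) -> str:
    # RULES (D)/(F): never initiate a conversation beyond the greeting;
    # otherwise wait for a human prompt and respond.
    if prompt is None:
        return GREETING
    # RULE (G): if the prompt itself looks likely to produce a
    # RULE (A) violation, refuse politely instead of answering.
    if is_offensive(prompt):
        return REFUSAL
    answer = generate(prompt)
    # RULE (C): a second control blocks distribution of an offensive
    # answer even if generation was not prevented.
    if is_offensive(answer):
        return REFUSAL
    return answer
```

For example, `compliant_reply(None)` yields only the permitted greeting, while a prompt containing a banned marker is met with the polite refusal. The two-layer check (prompt filter plus answer filter) mirrors the law's distinction between preventing “generation” and preventing “distribution”.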