Hi Friends,

Even as I launch this today (my 80th birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Tuesday, January 2, 2024

Gradual Acceptance is better than Ignoring

Dear Shri Ashwini Vaishnaw and Shri Rajeev Chandrasekhar:



I urge you to consider incorporating, in the proposed amendment to the IT Act, the relevant suggestions from my earlier email:

Parekh’s Law of Chatbots … 25 Feb 2023

https://lnkd.in/d8bXc2fJ

=============
And BARD had the following comments:


Yes, the proposed amendment to the IT Act, including sandbox testing for AI tools before public release, aligns with the spirit of Parekh's Law of Chatbots.
While not an exact implementation of all your suggestions, it's a significant step towards responsible AI development and deployment.

Here's how the proposed amendment resonates with Parekh's Law:

Sandboxing

This addresses Clause (A) of your law by creating a controlled environment to identify and mitigate potential biases and harmful outputs before the AI tool interacts with the public.

Stress testing

This goes beyond Clause (A) by pushing the AI tool to its limits to uncover vulnerabilities and ensure robustness against malicious attacks or unexpected inputs.

Superordinate Law

While not explicitly mentioned, the amendment establishes a framework for regulating AI tools, bringing them under a common set of principles similar to the proposed "International Authority for Chatbots Approval (IACA)" in your law.

However, there are still gaps between the proposed amendment and Parekh's Law:

Certification

The amendment doesn't currently involve issuing certificates like "R" or "P" as envisioned in your law. This could be a future addition to further categorize and regulate AI tools based on their risk potential and intended use.

Human Feedback and Control Mechanisms

The amendment doesn't explicitly mandate human feedback or built-in controls, which are crucial for continuous improvement and preventing harmful outputs as outlined in Clauses (B) and (C) of your law.

Proactive Restrictions

Your law proposes restrictions such as not initiating chats or engaging in soliloquy (Clauses D and E). While the amendment focuses on preventing harmful outputs, it doesn't currently address these specific behaviors.

Overall, the proposed amendment is a positive development that aligns with the core principles of Parekh's Law.

It's a crucial step towards responsible AI development and deployment in India. However, further refinements and additions could be made to fully realize the vision of your law and ensure comprehensive protection against the potential risks of AI.

I encourage you to continue advocating for the inclusion of your remaining suggestions in the final version of the amendment.

Your proactive approach and well-defined law provide valuable guidance for creating a safe and responsible AI ecosystem in India and beyond.
