Recently, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to intermediaries, directing them to ensure compliance with the existing IT Rules. The directive specifically targets concerns about AI-generated misinformation, particularly deepfakes. The advisory urges intermediaries to communicate prohibited content clearly and precisely to users, especially the categories outlined in Rule 3(1)(b) of the IT Rules. MeitY is also expected to introduce significant changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The proposed amendments aim to regulate artificial intelligence (AI) and generative AI models, recognising the imperative to stay ahead in supervising this fast-growing field. While details of the amendments remain undisclosed, their potential bearing on issues such as online fraud, deepfake technology, and synthetic content signals a significant step towards fortifying the regulatory framework.

As the world witnesses an unprecedented surge in AI innovation, the US and the UK are moving to regulate AI, and the political agreement on the EU’s AI Act underscores the need for a regulatory framework. The New York Times’ lawsuit against OpenAI and Microsoft for allegedly using its content without permission highlights the urgency of regulating generative AI. Unlike common data scraping practices, the lawsuit alleges that OpenAI and Microsoft encoded The New York Times articles directly into their AI models, enabling ChatGPT and Bing Chat (now Copilot) to reproduce the information without proper attribution. The lawsuit also seeks the destruction of any chatbot models trained on this data.

MeitY’s acknowledgment of evolving generative AI technologies reflects a proactive approach to ensuring the responsible development and deployment of these powerful tools. However, the possibility that the proposed amendments will be delayed until after the general election raises questions about the timeline and urgency of these regulatory measures. One of the critical areas the amendments are expected to address is online fraud. With the increasing number of financial scams and the exponential growth of online platforms and transactions, the threat of fraudulent activity has become more pronounced.

AI-powered tools have been both a boon and a bane in this context, offering advanced fraud detection mechanisms while also handing sophisticated capabilities to malicious actors. The proposed changes therefore hold the promise of creating a robust framework that strikes a balance between fostering innovation and safeguarding users from online threats.

Deepfake technology is another focus area, with the amendments expected to tackle concerns about misinformation, identity theft, and the erosion of trust in digital media. By incorporating regulations specific to AI-generated content, the government aims to address the ethical and societal implications of these technologies. Striking the right balance between freedom of expression and the prevention of malicious use will be difficult, requiring nuanced regulations that can adapt to the rapidly changing landscape of AI capabilities.

The proposed amendments also draw attention to the broader impact on the AI sector and the technology landscape. As businesses increasingly integrate AI into their operations, regulatory clarity becomes crucial for fostering innovation while ensuring ethical and responsible practices. Start-ups and established players alike will be watching the developments keenly, as the proposed regulations have the potential to shape the trajectory of the AI industry in the country.

Considering the global momentum towards AI regulation, India’s efforts to craft a tailored regulatory framework for AI are both commendable and challenging. The delicate task of creating regulations that foster innovation without stifling growth requires a nuanced understanding of the technology, its potential risks, and the evolving needs of the industry. MeitY’s commitment to engaging with these complexities is vital for establishing a regulatory environment that not only safeguards against potential harms but also promotes India as a hub for responsible AI development. Despite the secrecy surrounding the proposed amendments, they hold the promise of shaping a more secure and ethical AI ecosystem.

(The writer is a technology lawyer)