Alerts & Updates | March 19, 2024
On March 1, 2024, the Indian Ministry of Electronics and Information Technology (“MeitY”) issued an Advisory (“the Advisory”)[1] for intermediaries and platforms[2] involved in the business of Artificial Intelligence (“AI”) technology. While termed an “advisory”, the document set out a series of mandates, instructing intermediaries to adhere to them immediately and to submit an action-taken report to MeitY within 15 days.
As the Advisory caused an uproar within India’s growing startup community, the Minister of State for Electronics and Information Technology, Mr. Rajeev Chandrasekhar, quickly clarified on the social media platform X that the Advisory was aimed primarily at large tech firms and would not apply to startups,[3] despite there being no indication to that effect in the Advisory itself. To further complicate matters, news emerged on March 15, 2024, of a revised Advisory from MeitY rescinding certain aspects of the initial Advisory. This series of conflicting messages has left the tech industry grappling with uncertainty.
The resultant confusion surrounding the Advisory and its impact has sparked debate about responsibility and liability for firms deploying AI. In this explainer, we will delve into the implications of this Advisory on the AI landscape in India, while also exploring approaches adopted by other countries.
The Advisory is an addendum to a previous Advisory dated December 26, 2023, which laid down certain due diligence obligations to be met by intermediaries or platforms under the Information Technology Act, 2000 (“IT Act”) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). The new Advisory addresses the use and development of AI model(s)/large language models/Generative AI, software or algorithms, by placing the following obligations on intermediaries and platforms:
Paragraph 2(a) of the Advisory states that intermediaries or platforms should ensure that their use of AI models or software does not permit users to host, display, or share unlawful content.[4] This effectively makes intermediaries and platforms responsible for preventing users from engaging with banned content.
Intermediaries and platforms should ensure that their AI models or algorithms do not permit any bias or discrimination or threaten the integrity of the electoral process.[5]
The original Advisory issued on March 1, 2024 mandated that the use of under-testing or unreliable AI models have the explicit permission of the Government of India[6] and that such models be labelled appropriately for users.[7] Notably, based on media reports,[8] it appears that on March 15, 2024 MeitY revised the Advisory and did away with this permission requirement. Instead, intermediaries are now directed that under-trial/unreliable models should be made available in India only after they are labelled to inform users of the “possible inherent fallibility or unreliability of the output generated.”[9]
The Advisory mandates that intermediaries and platforms use their terms of service and user agreements to sufficiently apprise users of the risks and consequences of dealing with unlawful information.[10]
If an intermediary facilitates the creation, generation, or modification of information that could be used as misinformation or a deepfake, such information should be labelled or embedded with metadata or a unique identifier to identify its origin.[11] Such labelling should enable the regulator not only to identify the nature of the information in question but also to trace the following (a minimal illustrative sketch follows this list) –
– Which intermediary’s software or computer resource was used to create, generate, or modify the information?
– Who is the user of such software or computer resource?
– Who is the creator or first originator of such information or deepfake?
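The Advisory does not prescribe any particular labelling technique. Purely as a minimal sketch of the idea – and not a method the Advisory mandates – the Python snippet below embeds a provenance record in a generated PNG image using the Pillow library; the record fields (generator, user, content_id) are our own hypothetical choices, mapped onto the three traceability questions above. A plain-text tag of this kind is trivially strippable, which is why industry initiatives such as C2PA Content Credentials and robust watermarking aim for tamper-resistant alternatives.

```python
# Illustrative only: a naive provenance label embedded in PNG text metadata.
# The record layout is a hypothetical mapping of the Advisory's three
# traceability questions, not a prescribed format.
import json
import uuid
from datetime import datetime, timezone

from PIL import Image, PngImagePlugin


def embed_provenance(in_path: str, out_path: str, platform_id: str, user_id: str) -> str:
    record = {
        "generator": platform_id,        # which intermediary's software was used
        "user": user_id,                 # who used that software
        "content_id": uuid.uuid4().hex,  # unique identifier for this output
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    image = Image.open(in_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))  # stored as a PNG tEXt chunk
    image.save(out_path, pnginfo=meta)
    return record["content_id"]
```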
COMPARISON OF THE ADVISORY WITH OTHER JURISDICTIONS
India currently lacks comprehensive legislation specifically addressing AI; as a result, various aspects of AI are regulated under existing laws. Although the MeitY Advisory represents a notable effort towards AI regulation in India, it does not establish a comprehensive legislative framework. With a draft AI regulation framework expected to be released by the Government by July 2024,[12] an analysis of the MeitY Advisory against global regulatory practices (existing or proposed) may help identify best practices for such upcoming legislation. Set out below is a snapshot of the AI regulations being contemplated or enforced in some key jurisdictions.
The Infocomm Media Development Authority (a statutory board under the Singapore Ministry of Communications and Information), together with the AI Verify Foundation, issued a draft Model AI Governance Framework for Generative AI (“Draft Framework”) in January 2024,[13] expanding upon the Model AI Governance Framework for traditional AI issued in 2020. While not structured as a regulation, the Draft Framework outlines nine proposed dimensions aimed at fostering a comprehensive and trustworthy AI ecosystem.
The Draft Artificial Intelligence Act (“Draft AI Act”)[14] was passed by the European Parliament on March 13, 2024.[15] It aims to provide AI developers and deployers[16] with clear requirements and obligations regarding specific uses of AI,[17] and will enter into force upon its formal adoption and translation. This marks the introduction of the first comprehensive regulation of AI, including Generative AI, anywhere in the world.
Regulation of AI in the US is still in its nascent phase, lacking a comprehensive federal legislative framework. That said, Presidential Executive Order No. 14110 (“Executive Order”), issued on October 30, 2023, sets rules for American federal agencies and private companies to follow in the design, acquisition, and deployment of advanced AI systems.[18]
In the table below, we examine the key differences in regulatory approaches to the items covered in the MeitY Advisory in these jurisdictions:
| Issue | India (MeitY Advisory of March 1, 2024) | Singapore (Proposed Model AI Governance Framework for GenAI, 2024) | European Union (Draft EU Artificial Intelligence Act, 2024) | United States of America (US Executive Order No. 14110) |
| --- | --- | --- | --- | --- |
| Access to unlawful content should not be intermediated | No access to unlawful content.[19] | No specific direction/guideline on intermediating unlawful content is mentioned in the Draft Framework. | No such direction/guideline is mentioned in the Draft AI Act. While the Draft AI Act does not include an explicit provision prohibiting platforms from intermediating unlawful content, other legislation, such as the Digital Services Act, 2022, addresses the dissemination of illegal content online.[20] Further, the Draft AI Act identifies certain AI practices that are prohibited altogether. | No specific direction/guideline on intermediating unlawful content is mentioned in the Executive Order. |
| Integrity of the electoral process is not threatened by AI or Generative AI systems | Do not facilitate threats to the electoral process:[21] all intermediaries are generally directed to ensure that their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process. | Employ technical solutions, such as labelling output information, to enable effective tracing:[22] to avoid threats to the electoral process, the suggested tools include technical solutions such as digital watermarking and cryptographic provenance (illustrated in the sketch after this table), rather than sole reliance on a general compliance direction. | Classification as a high-risk AI system with a multi-pronged regulatory compliance directive:[23] the Draft AI Act sorts AI into risk categories with different degrees of regulation for each: minimal risk, limited risk, high risk and unacceptable risk.[24] AI systems capable of influencing the outcome of an election or the voting behaviour of natural persons are classified as high-risk. High-risk AI systems have specific and extensive compliance directions charted out individually for users, importers and distributors. Common requirements across categories include data localisation, regular monitoring, incident response and CE conformity marking.[25] | No specific obligation on intermediaries regarding electoral processes. |
| Use of under-testing/unreliable AI models | Prior permission was originally envisaged but has since been replaced by a labelling requirement:[26] appropriate labelling is required so that the inherent fallibility or unreliability of the output generated by under-trial/unreliable AI is communicated; the Government suggests use of a ‘consent popup mechanism’. | No specific permission envisaged:[27] the Draft Framework does not contemplate seeking permission to use or develop under-trial AI models; instead, it emphasizes that safety best practices be implemented by model developers and application deployers across the AI development lifecycle. | General disclosure requirement for all general-purpose AI models:[28] the Draft AI Act does not categorise AI models as under-testing. AI models across all development stages must disclose information on their development process, including design specifications and training methodologies and techniques. No permission needs to be sought from the Government for the use of under-testing/unreliable AI models. | No specific permission required, but the Government must be notified when a model can be classified as dual-use[29] and poses a national security risk:[30] the Executive Order states that, in accordance with the Defense Production Act, companies must provide the Federal Government with information, reports, or records regarding the training and development of dual-use foundation models. Even the intention to develop a dual-use AI model must be communicated to the Government. |
| Informing users adequately | Obligation on intermediaries to adequately inform users of the consequences of dealing with unlawful information:[31] all users must be informed, through terms of service and user agreements, of the consequences of dealing with unlawful information on the platform. | Does not envisage a specific obligation on intermediaries and platforms to inform users about the consequences of AI systems/platforms. | Obligation on developers of high-risk AI systems to adequately inform deployers of the risks associated with the use of such systems:[32] Article 13(3)(a)(iii) states that deployers must be made adequately aware of the risks associated with the use of high-risk AI systems and that such information must be included in the instructions for use. Separately, Article 29(6)(b) requires deployers to inform natural persons that they are subject to the use of a high-risk AI system. | No specific obligation on intermediaries and platforms to inform users about the consequences of AI systems/platforms. |
| Tackling deepfakes | Labelling obligation (metadata and identifiers) on intermediaries:[33] intermediaries facilitating the synthetic creation, generation or modification of text, audio, visual or audio-visual information in a manner that may be used as a deepfake should ensure that such information is labelled or embedded with permanent unique metadata or an identifier indicating that it was created, generated or modified using the intermediary’s computer resource, or identifying the user of the software. | Labelling obligation (cryptographic provenance and digital watermarking) on intermediaries.[34] | Disclosure obligation on intermediaries:[35] Article 52(3) of the Draft AI Act mandates that AI systems facilitating the creation of deepfakes explicitly disclose that the content is a deepfake. | No obligation on intermediaries to report or desist from the creation of deepfakes under the Executive Order: while the Executive Order is silent on deepfakes, the State of California passed two important bills addressing them in 2019 – Assembly Bill (AB) 730[36] and AB 602.[37] AB 730 prohibits the use of deepfakes to influence political campaigns, while AB 602 criminalises non-consensual deepfake pornography and gives victims the right to sue those who create images using their likeness. |
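The “cryptographic provenance” suggested in Singapore’s Draft Framework is stronger than a bare metadata tag because the origin claim becomes verifiable and tamper-evident. As a rough sketch of the underlying idea – not a scheme mandated by any of the instruments above – a platform could sign a hash of each generated artefact with its private key, allowing anyone holding the published public key to verify both the origin and the integrity of the content. The helper names below are hypothetical; the snippet uses the widely available `cryptography` package.

```python
# Rough sketch of cryptographic provenance: sign a content hash so that the
# origin claim is verifiable and any alteration of the content is detectable.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Platform side: sign the SHA-256 digest of the generated content."""
    return private_key.sign(hashlib.sha256(content).digest())


def verify_content(public_key: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Verifier side: recompute the digest and check the platform's signature."""
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False  # content altered, or signed by a different key


# Usage sketch: the platform publishes its public key; consumers verify.
key = Ed25519PrivateKey.generate()
artifact = b"...generated media bytes..."
sig = sign_content(key, artifact)
assert verify_content(key.public_key(), artifact, sig)
```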
IMPLICATIONS OF THE ADVISORY
The legal basis of this Advisory, like others issued by MeitY in the past, is questionable. The term “advisory” is not defined under the enabling legislation, i.e., the IT Act, or the accompanying IT Rules.[38] While Rule 13 of the IT Rules permits MeitY to issue appropriate guidance and advisories, this power is limited to publishers, i.e., publishers of news and current affairs content or publishers of online curated content. Indeed, LLMs and other AI technologies find no mention in the IT Act. Incidentally, the question of whether MeitY possesses residual powers to issue advisories and blocking actions and to regulate online gaming is currently sub judice.[39] Therefore, absent a clear amendment to the IT Rules, the issuance of such ‘advisories’ to intermediaries, and the requirement to comply with them, may be considered legally tenuous.
The Advisory originally required intermediaries and platforms to seek explicit permission from the Government before using or providing access to ‘under-testing or unreliable AI models.’ It specified neither the process for obtaining such permission nor the criteria the authorities would apply in granting it.
While MeitY’s recent revocation of this permission requirement is a welcome move for industry, the rapid issuance and revocation of obligations on intermediaries creates regulatory uncertainty. Further, it remains unclear whether small platforms must adhere to the labelling requirements for under-trial or unreliable AI models that are presently prescribed. Furthermore, the most recent version of the Advisory is not accessible in the public domain; advisories of this nature are typically not released to the public by MeitY. This lack of transparency adversely affects users and the general public, who should have clear insight into the obligations placed on the intermediaries they interact with, along with the associated implications. Such rules should ideally be developed through extensive stakeholder consultations to ensure that they align with the practicalities of conducting business and take into account the interests of all those impacted by them.
Accountability is crucial to ensure that all players in the AI development chain are responsible to end-users. The current Advisory, like the December 2023 advisory issued by MeitY, applies only to intermediaries and platforms, applying a single yardstick to all types of AI deployed, regardless of risk. Further, the Minister’s belated clarification[40] excludes an entire foundational level of the development chain, i.e., startups, from such obligations.
The development of Generative AI involves multiple levels in the tech stack. Recognising this, Singapore’s Draft Framework proposes that responsibility for complying with accountability measures be allocated based on the level of control each stakeholder has in the AI development chain.[41] Similarly, the EU Draft AI Act sets out different requirements for developers and deployers, calibrated to the risk profile of the AI system itself. Given the intricate nature of AI system development, deployment, and usage, a single set of obligations may not suffice for robust regulation. It remains to be seen whether the Government will refine its position to introduce a more nuanced approach to regulating different players in the AI space.
Until the issuance of this Advisory, no law or regulation in India directly addressed deepfakes. Although Section 66D of the IT Act lays down prohibitions and penalties for impersonating an individual, the scope of this provision is narrow, extending only to cases of cheating. The recent inclusion of deepfakes in the Advisory, while a step forward, primarily instructs intermediaries to label such content and does not define the term ‘deepfake’. Further, the Advisory appears to adopt a catch-all approach to deepfakes. This can be problematic, as the AI systems used to create deepfakes can also be valuable tools for arts and culture; for instance, museums have started using deepfakes to enhance audience engagement.[42] Acknowledging this complexity, the EU’s Draft AI Act includes a carve-out for artistic, creative, satirical, fictional, or analogous works or programmes, exempting such content from certain transparency obligations so as not to hinder the display or enjoyment of the work.[43] Such carve-outs may be viewed by some industries as essential to allow AI to support creative processes and uphold freedom of speech and expression.
In response to this Advisory, the industry has voiced strong concerns. The frequent introduction and rescinding of such “advisories” without legal backing, together with clarifications issued over social media, create regulatory uncertainty and may ultimately have a chilling effect on business, impeding the development and use of AI within the country. Furthermore, as the global landscape shifts towards reducing barriers to digital services trade, the selective application of regulations primarily targeting large corporations could be perceived as a trade barrier by India’s international counterparts. At the same time, the real risks posed by AI technologies such as deepfakes, which can manipulate information, erode trust in the media, and violate privacy, cannot be ignored. While the Advisory addresses previously unexplored issues, it is evident that a well-thought-out and comprehensive regulatory framework is required. Such a framework should establish objective criteria for assessing AI models and their associated risks. This could entail categorizing AI models based on defined thresholds, which would determine the need for additional oversight measures to mitigate potential incidents or rights violations arising from AI engagement. Moreover, such a law would also need to set out explicit obligations for both AI developers and the platforms utilizing AI.
While draft legislation is awaited, lessons can be drawn from the global discourse and initiatives surrounding AI regulation.
We trust you will find this an interesting read. For any queries or comments on this update, please feel free to contact us at insights@elp-in.com or write to our authors:
Sanjay Notani, Partner, Email – SanjayNotani@elp-in.com
Parthsarathi Jha, Partner, Email – ParthJha@elp-in.com
Naghm Ghei, Principal Associate, Email – NaghmGhei@elp-in.com
Shweta Kushe, Associate, Email – ShwetaKushe@elp-in.com
[1] Advisory No. 2(4)/2023-CyberLaws-3 issued by the Ministry of Electronics and Information Technology, Cyber Law and Data Governance Group (March 1, 2024) (Accessed at – https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf)
[2] Intermediaries are defined under Section 2(1)(w) of the IT Act, 2000.
[3] Suraksha P, ‘Govt’s AI advisory will not apply to startups: MoS IT Rajeev Chandrasekhar,’ The Economic Times (March 4, 2024) (Accessed at – https://economictimes.indiatimes.com/tech/technology/govts-ai-advisory-will-not-apply-to-startups-mos-it-rajeev-chandrasekhar/articleshow/108197797.cms?from=mdr)
[4] Paragraph 2(a), Advisory (n1)
[5] Paragraph 2(b), Advisory (n1)
[6] Paragraph 2(c), Advisory (n1)
[7] Ibid
[8] BL Mumbai Bureau, ‘MeitY revises AI advisory after push back from industry,’ The Hindu Businessline (March 17, 2024) (Accessed at – https://www.thehindubusinessline.com/info-tech/meity-revises-ai-advisory-after-push-back-from-industry/article67961693.ece/amp/)
[9] PTI, ‘AI Regulations Update: Permit-Free Development But Mandatory Labeling, Says Government Advisory,’ The Free Press Journal (March 16, 2024) (Accessed at – https://www.freepressjournal.in/business/ai-regulations-update-permit-free-development-but-mandatory-labeling-says-government-advisory)
[10] Paragraph 2(d), Advisory (n1)
[11] Paragraph 3, Advisory (n1)
[12] PTI, ‘Draft AI regulation framework to be released by July: Rajeev Chandrasekhar,’ The Indian Express (February 20, 2024) (Accessed at – https://indianexpress.com/article/india/draft-ai-regulation-framework-to-be-released-by-july-rajeev-chandrasekhar-9171854/)
[13] Proposed Model AI Governance Framework for Generative AI (January 16, 2024) (Accessed at – https://aiverifyfoundation.sg/downloads/Proposed_MGF_Gen_AI_2024.pdf)
[14] Draft EU Artificial Intelligence Act, 2024 (Accessed at – https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf)
[15] AP, ‘EU Parliament gives final nod to landmark AI law,’ The Economic Times (March 14, 2024) (Accessed at – https://economictimes.indiatimes.com/news/international/world-news/eu-parliament-gives-final-nod-to-landmark-ai-law/articleshow/108473484.cms?from=mdr)
[16] The notion of ‘deployer’ is stated to be any natural or legal person, including a public authority, agency, or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Depending on the type of AI system, the use of the system may affect persons other than the deployer.
[17] AI Act, The European Commission (Accessed at – https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai)
[18] Presidential Executive Order No. 14110, US Presidential Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023) (Accessed at – https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
[19] Paragraph 2(a), Advisory (n1). Furthermore, neither the IT Act nor any of its associated Rules provides a clear definition of the term “unlawful content.”
[20] Article 15 and 16, The EU Digital Services Act, 2022 (Accessed at – https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2065)
[21] Paragraph 2(b), Advisory (n1)
[22] Page 17, Proposed Model AI Governance Framework for Generative AI (n13)
[23] Paragraphs 14 and 14a, Page 23, Draft EU Artificial Intelligence Act, 2024 (n14)
[24] AI Act, The European Commission (n17)
[25] Chapter 5, Draft EU Artificial Intelligence Act, 2024 (n14)
[26] Paragraph 2(c), Advisory (n1)
[27] Page 10, Proposed Model AI Governance Framework for Generative AI (n13)
[28] Annexure IXa, Draft EU Artificial Intelligence Act, 2024 (n14)
[29] According to Section 3(k) of US Presidential Executive Order No. 14110, a dual-use foundation model is an AI model that is trained on broad data, generally uses self-supervision, is applicable across a wide range of contexts, and exhibits high levels of performance at tasks that pose a serious risk to security, national economic security, or national public health or safety. Generative AI models will also fall within this wide definition.
[30] Section 4.2, US Presidential Executive Order No. 14110 (n18)
[31] Paragraph 2(d), Advisory (n1)
[32] Articles 13(3)(a)(iii) and 29(6)(b), Draft EU Artificial Intelligence Act, 2024 (n14)
[33] Paragraph 3, Advisory (n1)
[34] Page 10, Proposed Model AI Governance Framework for Generative AI (n13)
[35] Article 52(3), Draft EU Artificial Intelligence Act, 2024 (n14)
[36] Assembly Bill No. 730 – Elections: deceptive audio or visual media, California State Assembly (October 3, 2019) (Accessed at – https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730)
[37] Assembly Bill No. 602 – Depiction of individual using digital or electronic technology: sexually explicit material: cause of action, California State Assembly (October 3, 2019) (Accessed at – https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602)
[38] Apar Gupta, ‘In issuing AI advisory, MEITY becomes a deity,’ The Hindu (March 15, 2024) (Accessed at – https://www.thehindu.com/opinion/lead/in-issuing-ai-advisory-meity-becomes-a-deity/article67951767.ece)
[39] The question of whether MeitY has any residual powers under the Information Technology Act, 2000 is currently being heard by the Bombay High Court (Kunal Kamra v. Union of India, W.P. No. 14955 of 2023) and the Delhi High Court (Social Organization for Creating Humanity v. Union of India, W.P. No. 8946 of 2023) in two separate matters challenging different IT Rules. However, the Karnataka High Court, in X Corp. v. Union of India, W.P. No. 13710 of 2022, has already held that MeitY’s blocking actions are well within the scope of the IT Act, 2000.
[40] Ashutosh Mishra & Peerzada Abrar, ‘Govt’s AI advisory only for large platforms, not for startups: MeitY,’ The Business Standard (March 4, 2024) (Accessed at – https://www.business-standard.com/industry/news/govt-s-ai-advisory-only-for-large-platforms-not-for-startups-meity-124030400468_1.html)
[41] For instance, Singapore’s Proposed Model AI Governance Framework for GenAI, 2024 suggests an ‘ex-ante’ approach to allocating accountability in the AI development chain. The framework draws parallels with the cloud industry, which has developed comprehensive shared responsibility models over time. These models serve as a foundation for accountability, but they require supplementation with measures such as indemnity and insurance. According to Singapore’s proposed framework, AI developers have already started underwriting risks like third-party copyright claims. This demonstrates their acknowledgment of responsibility for model training data and usage.
[42] Dami Lee, ‘Deepfake Salvador Dali takes selfies with museum visitors,’ The Verge (May 10, 2019) (Accessed at – https://www.theverge.com/2019/5/10/18540953/salvador-dali-lives-deepfake-museum)
[43] It states: “Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.” Article 52(3), Draft EU Artificial Intelligence Act, 2024 (n14)