NASSCOM's Guidelines On Responsible Use of Generative AI: Startup Takeaways

NASSCOM releases guidelines for responsible use of generative AI. By following these guidelines, startups can gain trust, navigate ethical challenges, and position themselves for long-term success in the AI landscape. Read on for details.

Swati Dayal

India's apex industry body for software and services, the National Association of Software and Service Companies (NASSCOM), has unveiled a comprehensive set of guidelines aimed at defining the responsible use of generative artificial intelligence (AI). The guidelines target researchers, developers, and users of generative AI models and applications, with a strong emphasis on conducting thorough risk assessments and maintaining internal oversight throughout the entire lifecycle of a generative AI solution.

The guidelines released by NASSCOM for the responsible use of generative AI can also provide valuable support and direction to startups utilizing this technology.

Mitigating the Potential Harm of Generative AI

The NASSCOM guidelines address various risks associated with generative AI, including misinformation, intellectual property infringement, data privacy violations, propagation of biases, large-scale disruptions to life and livelihood, environmental degradation, and malicious cyberattacks. In a statement, NASSCOM emphasized the importance of promoting awareness about the adoption of these guidelines, developing specific guidance for different use cases, and enhancing the existing responsible AI resource kit. 

NASSCOM President Debjani Ghosh highlighted that this framework is unique to India and represents a proactive step toward building a transparent and robust roadmap for the responsible development and use of AI.

The draft guidelines were formulated through consultations with the technology industry, a multi-disciplinary group of AI experts, researchers, and practitioners, including representatives from academia and civil society.

The guidelines set out the obligations of researchers, developers, and users, stressing the need for reasonable caution, foresight, transparency, and accountability. To support the progress of humanity, researchers and developers are expected to exercise this caution by conducting comprehensive risk assessments and maintaining internal oversight throughout the entire lifecycle of a generative AI solution.

Promoting Transparency and Accountability

Transparency and accountability are central to the guidelines. Public disclosure of the data and algorithm sources used for modelling, along with other technical, non-proprietary details about the development process, capabilities, and limitations of AI solutions, is strongly encouraged.

Importance Of Cautious Use and Risk Assessment

The guidelines stress the importance of cautious use and risk assessment to mitigate potential harm throughout the lifecycle of generative AI solutions. Developers are advised to publicly disclose data and algorithm sources unless they can demonstrate that such disclosures could harm public safety.

Grievance Redressal Mechanisms To Address Mishaps

To address mishaps during the development or use of generative AI solutions, the guidelines recommend making the outputs of these algorithms explainable and putting grievance redressal mechanisms in place.

Paving the Way for a Harmonious Future

Anant Maheshwari, NASSCOM Chairperson and Microsoft India President, expressed his belief that these guidelines would help unleash the true potential of AI, creating a future that harmoniously blends human ingenuity with technological advancement.

In summary, NASSCOM's guidelines for the responsible use of generative AI mark a significant step toward self-regulation of AI technology in India. They promote transparency, accountability, cautious use, and risk assessment, aiming to mitigate potential harm and build a roadmap for the responsible adoption of AI.

Key Takeaways: How Can These Guidelines Benefit Startups?

Clear Standards and Frameworks

The guidelines establish common standards and protocols for researching, developing, and using generative AI. This clarity helps startups navigate the complex landscape of generative AI, ensuring they adhere to responsible practices from the beginning. By following these guidelines, startups can build a strong foundation for their generative AI solutions.

Risk Assessment and Oversight

The guidelines emphasize the importance of conducting comprehensive risk assessments and maintaining internal oversight throughout the entire lifecycle of a generative AI solution. This approach helps startups identify and mitigate potential risks associated with their AI models and applications. By proactively addressing risks, startups can build more reliable and trustworthy AI systems.

Guidance for Different Use Cases

As the guidelines evolve, NASSCOM plans to develop specific guidance for different use cases. Startups operate across varied domains and sectors, and guidance tailored to their specific industry can be immensely valuable, helping them address unique challenges and understand the nuances of responsible generative AI implementation in their respective fields.

These guidelines can provide startups with a roadmap to navigate the ethical and responsible challenges associated with generative AI. By adhering to these guidelines, startups can build trust with users, investors, and regulators, fostering a positive reputation and positioning themselves for long-term success in the rapidly evolving AI landscape.
