Responsible AI For Developers: Resources For Self-Guided Learning

Nitya Narasimhan, Ph.D · Jan 31

Three Actions You Can Take To JumpStart The Journey:

  1. Visit the Responsible AI Developer Hub
  2. Bookmark the Responsible AI Developers Collection
  3. Complete the Responsible AI Dashboard Learn Module

What is Responsible AI?

If you've built AI applications or trained a machine learning model, you've probably heard the term responsible AI being used in combination with words like "evaluation", "fairness" or "explainability". So what do these mean? And why should you care about responsible AI? And most importantly, how can you skill up on this with self-guided learning resources?

By one definition, Responsible AI is an approach to "assessing, developing and deploying AI systems in a safe, trustworthy and ethical manner". Rapid growth in Large Language Models (LLMs) and generative AI applications is accelerating the adoption of AI in every aspect of our daily lives. That growth raises concerns around data privacy, and around the ability of these systems to operate fairly and reliably when making decisions that have the potential to harm individuals and society.


6 Principles Of Responsible AI

To address this, Microsoft identified six principles to guide AI development and usage. These are:

  1. Fairness | AI systems should allocate opportunities or resources in a fair and unbiased manner, across all users.
  2. Reliability & Safety | Systems should work correctly and reliably, even under conditions or contexts they may not have been designed for.
  3. Privacy and Security | Systems should protect data and preserve user privacy in all operations.
  4. Inclusiveness | Systems should function in a manner that is inclusive to people of all abilities.
  5. Transparency | Systems should be understandable to their users, so that they are not misused or misinterpreted.
  6. Accountability | Humans should be accountable for system operations by means of appropriate oversight.

Principles of Responsible AI


The Responsible AI Developer Hub

How can we apply these principles in practice as we build our machine learning models or generative AI applications? How can we integrate these practices into our developer workflows? To help developers answer these questions and more, we set up the Responsible AI Developer Hub - a site with training materials and resources for the self-guided learner.

Responsible AI Developer Hub

The site currently has three workshops that help you learn responsible AI practices for model debugging, prompt engineering (and LLMOps) and content safety for generative AI experiences. Let's take a quick look at each.


Responsible AI Dashboard Workshop

Building a machine learning model for your application domain with a relevant dataset? How can you ensure that your model follows responsible AI practices? Traditional model performance metrics (e.g. accuracy) provide aggregated results that are insufficient for pinpointing where biases or model errors exist that could negatively affect people. This hands-on lab teaches you how to debug your model using a number of tools (error analysis, model performance, data representation, interpretability etc.) to assess it for fairness, transparency, inclusiveness and more. You'll start by training a model on a given dataset, then register the trained model on Azure and create a customized Dashboard to help you debug it visually.

The Dashboard is a single-pane-of-glass experience that lets you seamlessly integrate and use a variety of toolkit components from the same UI.
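To make this concrete, here is a minimal local sketch, assuming the open-source `responsibleai` and `raiwidgets` packages that the Dashboard builds on; the dataset file, model choice, and column names below are illustrative placeholders, not the workshop's own materials:

```python
# A minimal local sketch, assuming the open-source responsibleai / raiwidgets
# packages (pip install raiwidgets) that the Dashboard builds on. The CSV file,
# model choice, and "approved" target column are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Hypothetical tabular dataset with numeric features and a binary target
data = pd.read_csv("loan_applications.csv")
train, test = train_test_split(data, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42)
model.fit(train.drop(columns=["approved"]), train["approved"])

# Collect Responsible AI insights for the trained model
insights = RAIInsights(
    model=model,
    train=train,
    test=test,
    target_column="approved",
    task_type="classification",
)
insights.explainer.add()       # interpretability: feature importances
insights.error_analysis.add()  # find cohorts with elevated error rates
insights.compute()

# Launch the single-pane-of-glass Dashboard in a notebook or browser
ResponsibleAIDashboard(insights)
```

In the workshop itself, you attach these insights to a model registered in Azure Machine Learning, so the Dashboard lives alongside your workspace assets rather than in a local notebook.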


Azure Content Safety Workshop

The Dashboard can help you debug the underlying model for fairness, reliability, and so on. But generative AI poses an additional challenge for responsible AI experiences: ensuring that prompts and responses do not include offensive, harmful or inappropriate content, whether in text or images.

With the Azure Content Safety tools, developers can now moderate text, image and multimodal content using out-of-the-box AI models with built-in blocklists that you can customize further. In this workshop, you'll learn to detect and flag text that is unsuitable for end users, block inappropriate images, and create applications that present a safe and friendly tone to your audience.
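As a taste of what the workshop covers, here is a minimal sketch, assuming the `azure-ai-contentsafety` Python SDK; the endpoint, key, and severity threshold below are placeholders for your own resource and app policy:

```python
# A minimal sketch, assuming the azure-ai-contentsafety Python SDK
# (pip install azure-ai-contentsafety). Endpoint, key, and the severity
# threshold below are placeholders for your own resource and app policy.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze user-supplied text across the built-in harm categories
response = client.analyze_text(AnalyzeTextOptions(text="<user input here>"))

# Flag anything above a severity our (illustrative) app policy allows
MAX_ALLOWED_SEVERITY = 1
for result in response.categories_analysis:
    if result.severity is not None and result.severity > MAX_ALLOWED_SEVERITY:
        print(f"Blocked: {result.category} (severity {result.severity})")
```

The workshop extends this pattern to images and to customized blocklists, so the same moderation step can gate both prompts going into your model and responses coming out of it.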


Prompt Flow Evaluation Workshop

Generative AI applications rely on prompt engineering to improve the quality and relevance of the responses generated by the underlying model. Prompt engineering is itself a tedious process, requiring orchestration of multiple steps in a "prompt flow" before the final model prompt can be generated and the result returned to the user.

Responsible AI in Prompt Engineering

To infuse responsible AI tools and practices into your LLMOps, it is not enough to understand the best practices identified in the article above. Instead, you need to iteratively test and refine the process, evaluating the results each time until you get the desired outcome. Azure, with Prompt Flow, provides a comprehensive evaluation toolkit with sample flows that you can use (and customize) to evaluate metrics like groundedness and relevance.

In this workshop, you'll learn to create a chat application using an LLM with a RAG architecture, then use the QnA RAG Evaluation template to test the flow for relevant metrics.
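For a flavor of the batch-run-then-evaluate loop, here is a minimal sketch, assuming the open-source `promptflow` SDK; the flow folders, test data file, and column names are hypothetical, and the workshop's QnA RAG Evaluation template supplies its own flow definitions:

```python
# A minimal sketch of the batch-run-then-evaluate loop, assuming the
# open-source promptflow SDK (pip install promptflow). The flow folders,
# JSONL test set, and column names are hypothetical; the workshop's
# QnA RAG Evaluation template ships its own flow definitions.
from promptflow import PFClient

pf = PFClient()

# Batch-run the chat flow against a JSONL file of test questions
base_run = pf.run(
    flow="./rag-chat-flow",          # hypothetical chat flow folder
    data="./data/questions.jsonl",   # hypothetical test set
    column_mapping={"question": "${data.question}"},
)

# Run an evaluation flow over the chat flow's outputs
eval_run = pf.run(
    flow="./qna-rag-evaluation",     # hypothetical evaluation flow folder
    data="./data/questions.jsonl",
    run=base_run,                    # joins eval inputs to the base run's outputs
    column_mapping={
        "question": "${data.question}",
        "answer": "${run.outputs.answer}",
    },
)

# Inspect aggregated metrics such as groundedness and relevance scores
print(pf.get_metrics(eval_run))
```

Each evaluation run produces per-row scores plus aggregated metrics, which is what makes the iterate-evaluate-refine loop practical at scale.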


Related Resources

Want to learn more about Responsible AI? Bookmark our Collection and watch for updates. You can also follow the authors on dev.to for news on upcoming events and content to help you skill up further.

Last but not least, check out the Responsible AI Toolbox suite from Microsoft Research, for an open-source version of the tools that you can use for a customized end-to-end responsible AI experience.

Build AI Apps. And Build Responsibly.🛡


Three Actions You Can Take To JumpStart The Journey:

  1. Visit the Responsible AI Developer Hub
  2. Bookmark the Responsible AI Developers Collection
  3. Complete the Responsible AI Dashboard Learn Module