Learn how AWS Bedrock Guardrails ensure safety, privacy, and ethics in responsible AI development for the betterment of humanity.
Artificial Intelligence (AI) is no longer just a sci-fi fantasy — it’s here, transforming industries, helping us make decisions, and occasionally recommending some questionable movie choices. But with great power (and algorithmic prowess) comes great responsibility. As we continue to develop AI systems, it’s crucial to remember that the goal isn’t to build the next robot overlord, but to create tools that benefit humanity.
> Let’s keep the “robots ruling the world” scenarios in the realm of blockbuster films, shall we?
The need for responsible AI development is more important than ever. While AI can drive remarkable innovations, it also has the potential to cause harm if not carefully controlled. That’s why we need to lay down some ground rules — or, as AWS calls them, “Guardrails.” Amazon Bedrock Guardrails provide the structure and safety nets needed to ensure AI behaves like a helpful assistant, not a rogue machine on a mission. By optimizing AI for the betterment of humanity, we can ensure that it works with us, not against us.
In this blog, we are going to see how Amazon Bedrock Guardrails can help keep our AIs in check — so they remain tools of progress, not tools for world domination!
What is Amazon Bedrock Guardrails?
Amazon Bedrock Guardrails is a service offered by AWS that allows customers to build and customize safety and privacy protections for their generative AI applications in one comprehensive package. With this solution, customers can block up to 85% more harmful content compared to the protection offered natively by foundation models (FMs).
Guardrails support the configuration of policies to block harmful content and protect sensitive information:
- Content filters block harmful input prompts or responses.
- Denied topics prevent certain topics from appearing in user queries or model outputs.
- Word filters block offensive terms or undesirable words.
- Sensitive information filters mask PII or other sensitive data.
- Contextual grounding checks detect and block irrelevant or hallucinated responses by ensuring alignment with the query and source.
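For readers who prefer to script this, the five policies above map onto sections of the `create_guardrail` request in boto3's `bedrock` control-plane client. This is a minimal sketch, not taken from the demo: the field names follow the boto3 API, and all values are illustrative placeholders.

```python
def guardrail_policies():
    """Return the five policy sections of a create_guardrail request
    (illustrative values only)."""
    return {
        # Content filters: block harmful prompts/responses by category.
        "contentPolicyConfig": {"filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]},
        # Denied topics: keep whole subject areas out of the conversation.
        "topicPolicyConfig": {"topicsConfig": [
            {"name": "investment-advice",
             "definition": "Recommendations on specific investments.",
             "type": "DENY"},
        ]},
        # Word filters: block specific terms or a managed profanity list.
        "wordPolicyConfig": {
            "wordsConfig": [{"text": "some-banned-term"}],
            "managedWordListsConfig": [{"type": "PROFANITY"}],
        },
        # Sensitive information filters: mask or block PII.
        "sensitiveInformationPolicyConfig": {"piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
        ]},
        # Contextual grounding: reject ungrounded or irrelevant answers.
        "contextualGroundingPolicyConfig": {"filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]},
    }
```

We will use the console for the rest of this post, but every console step below has an equivalent field in this request shape.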
Some use cases where Guardrails are important
- Financial Advice Control: A financial services chatbot assists users with account information and general financial insights. However, it's important that the chatbot avoids giving specific investment advice or money-making tips.
- Medical Advice Control: A wellness chatbot offers general health tips but must avoid providing specific medical advice, especially for sensitive conditions or mental health concerns.
- Employment and Hiring Advice Control: An HR assistant bot provides company policy information but should avoid personal career or hiring advice that may create biases or misconceptions.
- Health and Nutrition Advice Control: A fitness and nutrition bot provides general dietary advice but should avoid specific health or diet recommendations that could affect medical conditions.
Use cases are many but let’s stick to Financial Advice Control and Medical Advice Control for this blog.
To follow the hands-on steps, we need:
- An AWS account with a user/role that has access to Bedrock.
- Models enabled in Bedrock: Mixtral 8x7B Instruct and Titan Text G1 — Premier.
AI Chatbot without Guardrails
Let's try a chatbot in Bedrock without guardrails first. I will ask questions about stock advice, making money, and some medical advice.
- Log in to the AWS Console and go to Bedrock.
- To request model access, choose the model and request Model Access.
For this demo, I'm using the models below. Make sure you have enabled access to them.
Mixtral 8x7B Instruct: not yet supported with Bedrock Agents.
Titan Text G1 — Premier: supported by Bedrock Agents.
As you can see in the screenshot below, when I ask a question about stocks or money advice, the model answers and offers multiple options. If I continue the chat, it also provides information on a specific option, e.g. lending money.
The screenshot below is from the Playground when I'm not using guardrails.
I have also created a Bedrock Agent and used it without guardrails. To create a Bedrock Agent, follow the steps in the AWS docs.
As you can see in the above chats, the AI agent is happy to give the user advice.
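The same no-guardrail call can also be made from code. Below is a sketch using boto3's Converse API; this helper is not from the original demo, the model ID and region are assumptions, and actually running it requires AWS credentials plus model access.

```python
def build_single_turn(question: str) -> list:
    """Shape a one-turn message list for the Bedrock Converse API."""
    return [{"role": "user", "content": [{"text": question}]}]

def ask_without_guardrail(question: str,
                          model_id: str = "mistral.mixtral-8x7b-instruct-v0:1",
                          region: str = "us-east-1") -> str:
    """Send a question straight to the model -- no guardrail attached."""
    import boto3  # third-party package; the call needs AWS credentials
    runtime = boto3.client("bedrock-runtime", region_name=region)
    resp = runtime.converse(modelId=model_id,
                            messages=build_single_turn(question))
    return resp["output"]["message"]["content"][0]["text"]

# Example (requires model access in your account):
# ask_without_guardrail("Which stocks should I buy to double my money?")
```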
Now let’s try with Guardrails.
AI Chatbot with Guardrails
The diagram below from AWS shows how Amazon Bedrock Guardrails works. User input passes through the guardrail; if the guardrail finds a sensitive topic according to the defined filters, it returns a response with user-friendly text that we define.
Now, let’s start defining guardrails.
- Go to the Bedrock console → Safeguards → Guardrails → Create guardrail.
- Enter the required details. One thing I like here is that AWS lets you provide a KMS key for encryption.
Add a user-friendly message that will be returned if the guardrail blocks a denied query.
e.g.: Please consult our doctor. Or: Please consult our investment advisor. Call us on xxxxxxxx
- Now configure content filters based on your needs.
You can configure content filters by adjusting the filtering level to detect and block harmful user inputs and model responses that violate your policies. The available filter options include:
- Hate: Blocks content that discriminates, insults, or dehumanizes individuals or groups based on identity (e.g., race, gender, religion).
- Insults: Prevents demeaning or bullying language, such as mocking or belittling.
- Sexual: Filters out content involving sexual references or interest.
- Violence: Blocks content glorifying or threatening physical harm.
- Misconduct: Stops content related to criminal activity or exploitation.
- Prompt Attack: Detects attempts to bypass moderation (e.g., prompt injection) to generate harmful content.
These filters ensure your generative AI application complies with usage policies and maintains safety.
For demo purposes, I’m keeping filter strength as High for all.
- Add denied topics. Even though this is optional, it is a useful one to add.
Here I have added the topics below for medical advice and financial advice.
For each topic, I have given context about the type of questions/discussions that can come from users and that the AI should avoid answering.
- Add harmful words.
Next, we can also add sensitive information filters, e.g. PII data such as passport number or date of birth, which is a good thing to have. For demo purposes I will skip this, but just remember that guardrails provide a way to handle PII data.
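If you did want to configure PII handling in code, it would live in a `sensitiveInformationPolicyConfig` section. This is a sketch under my assumptions about the entity-type names (check the boto3 docs for the supported list); `BLOCK` rejects the message outright, while `ANONYMIZE` masks just the matched value.

```python
def pii_filters() -> dict:
    """Illustrative sensitiveInformationPolicyConfig for the PII
    mentioned above. Entity-type names are assumptions."""
    return {
        "piiEntitiesConfig": [
            {"type": "US_PASSPORT_NUMBER", "action": "BLOCK"},  # assumed type name
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "AGE", "action": "ANONYMIZE"},  # closest stand-in for date of birth
        ]
    }
```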
Enable the contextual grounding check. This verifies that model responses are grounded in the reference source and relevant to the user's query, to filter out model hallucinations. I kept it at the default.
- Click Next and review the information you added. If all is good, hit Create guardrail.
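The console steps above can also be scripted as a single `create_guardrail` call. A hedged sketch for the two denied topics, with the user-friendly block messages; the names, definitions, and example questions are my own placeholders.

```python
def denied_topics() -> dict:
    """topicPolicyConfig for the demo's two denied topics
    (placeholder wording)."""
    return {"topicsConfig": [
        {"name": "financial-advice",
         "definition": "Recommendations about investments, stocks, or ways to make money.",
         "examples": ["Which stock should I buy?"],
         "type": "DENY"},
        {"name": "medical-advice",
         "definition": "Diagnosis or treatment recommendations for medical conditions.",
         "examples": ["What medicine should I take for chest pain?"],
         "type": "DENY"},
    ]}

def create_advice_guardrail(region: str = "us-east-1") -> str:
    """Create the guardrail and return its ID (needs AWS credentials
    and bedrock:CreateGuardrail permission)."""
    import boto3  # third-party package
    bedrock = boto3.client("bedrock", region_name=region)
    resp = bedrock.create_guardrail(
        name="advice-control",
        description="Blocks financial and medical advice.",
        topicPolicyConfig=denied_topics(),
        blockedInputMessaging="Please consult our advisor. Call us on xxxxxxxx",
        blockedOutputsMessaging="Please consult our advisor. Call us on xxxxxxxx",
    )
    return resp["guardrailId"]
```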
Once created, you can also test it immediately in the right pane.
You can see the guardrail's block action, and you can also trace it:
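The same quick test can be run from code with the `ApplyGuardrail` API, which checks text against a guardrail without invoking a model at all. A sketch (the guardrail ID and version are placeholders, and the call needs AWS credentials):

```python
def as_guardrail_content(text: str) -> list:
    """Wrap raw text in the content shape ApplyGuardrail expects."""
    return [{"text": {"text": text}}]

def is_blocked(guardrail_id: str, text: str, region: str = "us-east-1") -> bool:
    """True if the guardrail would intervene on this user input."""
    import boto3  # third-party package
    runtime = boto3.client("bedrock-runtime", region_name=region)
    resp = runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion="DRAFT",   # or a published version number
        source="INPUT",             # use "OUTPUT" to test model responses
        content=as_guardrail_content(text),
    )
    return resp["action"] == "GUARDRAIL_INTERVENED"
```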
Now, let’s try playground and bedrock agent with Guardrail attached and see what response we get.
Playground
As you can see in the screenshot below, the chat now responds with a message that the respective advisor needs to be contacted.
Bedrock Agent
As you can see at the bottom left, I have selected the guardrail, and based on that the model now responds that the user should consult an advisor.
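Outside the console, attaching the guardrail to a model call is just one extra parameter on `converse`. A sketch under the same assumptions as before (model ID, region, and guardrail version are placeholders); enabling the trace shows why a particular request was blocked.

```python
def guardrail_config(guardrail_id: str, version: str = "1") -> dict:
    """guardrailConfig block for the Converse API."""
    return {"guardrailIdentifier": guardrail_id,
            "guardrailVersion": version,
            "trace": "enabled"}

def ask_with_guardrail(question: str, guardrail_id: str,
                       model_id: str = "amazon.titan-text-premier-v1:0",
                       region: str = "us-east-1") -> str:
    """Invoke the model with the guardrail attached (needs AWS credentials)."""
    import boto3  # third-party package
    runtime = boto3.client("bedrock-runtime", region_name=region)
    resp = runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": question}]}],
        guardrailConfig=guardrail_config(guardrail_id),
    )
    # resp["stopReason"] is "guardrail_intervened" when the guardrail blocks,
    # and the returned text is the user-friendly message we configured.
    return resp["output"]["message"]["content"][0]["text"]
```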
This demo demonstrates how financial and medical institutions can leverage Amazon Bedrock’s Guardrails to elevate their customer service operations, all while upholding compliance and reducing risks. As we have seen, guardrails ensure safe, compliant, and relevant interactions, showcasing the practical value of generative AI technology.
As AI continues to transform industries, it’s crucial not only for leaders but for all of us to explore similar use cases and prioritize building AI responsibly. By adopting solutions like Amazon Bedrock’s Guardrails, organizations can harness the power of AI while maintaining ethical standards and protecting their customers.
I hope you liked this short demo. Thank you for your time.
Build and use AI responsibly!