Detecting Violent Content with AI: A Simple Image Analyzer Using APILayer & React

Precious Kelvin Nwaogu - Feb 17 - - Dev Community

🚀 Introduction

With the rise of AI, analyzing images for violent content is now possible! I built a Violence Detection App using React.js, APILayer API, and Imgbb to help users identify potentially harmful images before sharing them online.

👉 Live Demo

👉 GitHub Repo


🎯 How It Works

1๏ธโƒฃ Upload an image (or use Imgbb to generate a URL).

2๏ธโƒฃ Analyze the image using the APILayer Violence Detection API.

3๏ธโƒฃ Get a detailed risk assessment based on AI analysis.


💡 Risk Levels:

✅ Safe (Very Unlikely or Unlikely to contain violence).

⚠️ Needs Review (Possible violence detected).

🚨 Flagged (Likely or Highly Likely to contain violence).
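The three buckets above can be expressed as a small mapping function. The exact field name and likelihood strings returned by the APILayer endpoint are assumptions here; check the API's response schema before relying on them.

```javascript
// Sketch: map an AI likelihood rating to the app's three risk levels.
// The likelihood value strings are assumptions based on the buckets above.
function toRiskLevel(likelihood) {
  switch (likelihood) {
    case "Very Unlikely":
    case "Unlikely":
      return "Safe";
    case "Possible":
      return "Needs Review";
    case "Likely":
    case "Highly Likely":
      return "Flagged";
    default:
      return "Unknown";
  }
}
```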

// Fetching the image analysis result from APILayer
fetch(
  `https://api.apilayer.com/violence_detection/url?url=${encodeURIComponent(imageUrl)}`,
  {
    method: "GET",
    headers: {
      apikey: process.env.REACT_APP_API_KEY,
    },
  }
)
  .then((response) => {
    if (!response.ok) throw new Error(`Request failed: ${response.status}`);
    return response.json();
  })
  .then((data) => console.log(data))
  .catch((error) => console.error("Analysis failed:", error));

🎨 Cool Features

✅ Broken-border design around the analysis steps.
✅ Animated "Go Back" button for a smooth user experience.
✅ Easy-to-use image upload system (Imgbb integration).
✅ Professional UI/UX with real-time analysis results.


🖥 Building This Yourself?

🔹 Fork the GitHub repo, add your APILayer API key, and deploy it!
🔹 Feel free to improve it or add features! Contributions welcome.


🔥 Final Thoughts

This project can be useful for social media platforms, parental control apps, and content moderation tools. AI-powered safety measures can help prevent exposure to harmful content online.

💬 What do you think? Drop a comment if you have ideas for improvement! 🚀
