🚀 Introduction
With the rise of AI, analyzing images for violent content is now possible! I built a Violence Detection App using React.js, APILayer API, and Imgbb to help users identify potentially harmful images before sharing them online.
🔗 Live Demo
🔗 GitHub Repo
🎯 How It Works
1️⃣ Upload an image (or use Imgbb to generate a URL; see the upload sketch after these steps).
2️⃣ Analyze the image using the APILayer Violence Detection API.
3️⃣ Get a detailed risk assessment based on the AI analysis.
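Here's a rough sketch of step 1, assuming Imgbb's standard v1 upload endpoint and a hypothetical `REACT_APP_IMGBB_KEY` environment variable:

```javascript
// Upload a file to Imgbb and return the hosted URL (sketch, not the repo's exact code)
async function uploadToImgbb(file) {
  const formData = new FormData();
  formData.append("image", file); // Imgbb accepts a file or a base64 string

  const response = await fetch(
    `https://api.imgbb.com/1/upload?key=${process.env.REACT_APP_IMGBB_KEY}`,
    { method: "POST", body: formData }
  );
  const json = await response.json();
  return json.data.url; // public URL to feed into the violence detection API
}
```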
💡 Risk Levels:
✅ Safe (Very Unlikely or Unlikely to contain violence).
⚠️ Needs Review (Possible violence detected).
🚨 Flagged (Likely or Highly Likely to contain violence).
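Mapping the API's verdict to these levels can be a simple switch. Here's a minimal sketch, assuming the API reports a likelihood string like "UNLIKELY" (the exact response field may differ; check the APILayer docs):

```javascript
// Translate a likelihood string into one of the three risk levels above
function riskLevel(likelihood) {
  switch (likelihood) {
    case "VERY_UNLIKELY":
    case "UNLIKELY":
      return "✅ Safe";
    case "POSSIBLE":
      return "⚠️ Needs Review";
    case "LIKELY":
    case "VERY_LIKELY":
      return "🚨 Flagged";
    default:
      return "Unknown";
  }
}
```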
Here's how the app fetches the analysis result (step 2 above):

```javascript
// Fetch the image analysis result from the APILayer Violence Detection API
fetch(`https://api.apilayer.com/violence_detection/url?url=${encodeURIComponent(imageUrl)}`, {
  method: "GET",
  headers: {
    apikey: process.env.REACT_APP_API_KEY,
  },
})
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error("Analysis failed:", error));
```
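To wire this into the UI, a minimal component sketch could look like the following (the component and handler names are illustrative, not taken from the repo):

```javascript
import { useState } from "react";

function ViolenceChecker() {
  const [result, setResult] = useState(null);
  const [error, setError] = useState(null);

  // Call the APILayer endpoint and store the parsed result in state
  async function analyzeImage(imageUrl) {
    try {
      const response = await fetch(
        `https://api.apilayer.com/violence_detection/url?url=${encodeURIComponent(imageUrl)}`,
        { headers: { apikey: process.env.REACT_APP_API_KEY } }
      );
      if (!response.ok) throw new Error(`API error: ${response.status}`);
      setResult(await response.json());
    } catch (err) {
      setError(err.message);
    }
  }

  return (
    <div>
      <button onClick={() => analyzeImage("https://example.com/photo.jpg")}>
        Analyze
      </button>
      {error && <p>⚠️ {error}</p>}
      {result && <pre>{JSON.stringify(result, null, 2)}</pre>}
    </div>
  );
}
```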
🎨 Cool Features
✅ Broken border design around the analysis steps.
✅ Animated "Go Back" button for a smooth user experience.
✅ Easy-to-use image upload system (Imgbb integration).
✅ Professional UI/UX with real-time analysis results.
🔥 Building This Yourself?
🔹 Fork the GitHub repo, add your APILayer API key (see the example .env below), and deploy it!
🔹 Feel free to improve it or add features! Contributions welcome.
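For local development, your `.env` could look like this; `REACT_APP_API_KEY` comes from the code above, while `REACT_APP_IMGBB_KEY` is only needed if you use the Imgbb sketch and its name is illustrative:

```bash
# Hypothetical local .env (never commit real keys)
REACT_APP_API_KEY=your_apilayer_key
REACT_APP_IMGBB_KEY=your_imgbb_key
```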
🔥 Final Thoughts
This project can be useful for social media platforms, parental control apps, and content moderation tools. AI-powered safety measures can help prevent exposure to harmful content online.
💬 What do you think? Drop a comment if you have ideas for improvement! 🚀