Overview
Two years ago, I started a personal project with a big goal: creating a truly complete RSS client. I know what you're probably thinking—aren't there already thousands of RSS clients out there? It's true, but I believe none of them have yet delivered the ultimate user experience.
Of course, there are some fantastic tools in the realm of bookmark managers and RSS clients, like the impressive Grimoire project. There's also a wealth of other resources on GitHub’s Awesome Selfhosted list.
After much trial and error, I realized what I truly wanted from a bookmark manager:
- Self-hostable: No syncing across external platforms. I want my bookmarks secure and fully managed on my own server.
- Scalable: It must handle thousands of bookmarks with ease.
- Powerful search and tagging: With so many bookmarks, an efficient search and tagging system is essential.
- Comment and note support: I need the ability to add detailed notes or context to each bookmark.
- File over function: The ability to import/export in multiple formats is a must.
- Open Source: I want full transparency, and I aim to prevent the "enshittification" that often creeps into closed systems.
- Small footprint: It should run on a Raspberry Pi or a small NAS.
Looking at other RSS clients, I found that very few could meet my criteria. Many, in my opinion, fall short in features or flexibility.
Introducing Django-link-archive
I’ve developed most of these features in my project, Django-link-archive, which has become my primary tool for managing bookmarks. It’s transformed how I navigate content online—I control what I want to see and avoid the distractions pushed by social media algorithms.
Take a look if you’re interested:
Seeking Feedback
Now, I'm looking for feedback. Are there other requirements you’d expect from a robust RSS client or bookmark manager? Any features you find especially useful?
I've already received insightful ideas from the Reddit community. For example, I recently added a kiosk-like feature where the list of entries refreshes periodically. I also integrated jQuery, making interactions much more fluid.
Additional Projects
As I continued working with RSS data, I built out some related repositories, such as:
In some ways, this project has evolved into a simplified web crawler. I’ve added options for changing "browser" mechanisms in the backend to include requests, Selenium, and Crawlee. This setup is entirely configurable through a GUI, so I can assign specific crawling methods to particular domains—for instance, Spotify might require a full Selenium browser, while Crawlee performs better with other domains.
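To make the per-domain idea concrete, here is a minimal Python sketch of how a domain-to-backend dispatch could look. This is not Django-link-archive's actual API: the `CRAWLER_FOR_DOMAIN` mapping and the `fetch` helper are hypothetical names for illustration, and only the `requests` path is shown as working code.

```python
from urllib.parse import urlparse

import requests

# Hypothetical per-domain configuration: each domain is mapped to a
# crawling backend name. In the real project this kind of mapping is
# edited through the GUI; here it is just a dictionary for illustration.
CRAWLER_FOR_DOMAIN = {
    "open.spotify.com": "selenium",  # JS-heavy pages need a full browser
    "example.com": "requests",       # plain HTTP is enough
}


def fetch(url: str, timeout: int = 20) -> str:
    """Fetch a page using the backend configured for its domain."""
    domain = urlparse(url).netloc
    backend = CRAWLER_FOR_DOMAIN.get(domain, "requests")

    if backend == "requests":
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        return response.text

    # Selenium and Crawlee branches would go here; they are omitted because
    # their setup (driver installation, browser binaries) is environment-specific.
    raise NotImplementedError(f"Backend '{backend}' is not wired up in this sketch")
```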
Maintaining this ecosystem solo has been a lot of work, and things do occasionally break. Still, I’m excited to share this with the community and hear your thoughts!
Thank you for reading, and I look forward to any feedback you may have.