Dangerous Minds

There is a worrisome growing trend on YouTube of targeting creators like me who produce content about self-hosting. I have already been on the receiving end of this campaign, and now it's getting much more public as YouTube goes after bigger channels.
Personally, I have been a part of this fight since the early days of my YouTube channel. When posting videos detailing media management, I am very careful never to show or instruct users on how to pirate copyrighted content. I also never show copyrighted content itself, as both of these things are in clear violation of community guidelines, and I understand that.
However, I will tell you that I have a permanent strike on my channel for showing users how to use the Jellyseerr container. Not how to download content, but purely what the container is, how to connect it to other containers, user management, notifications, and other basic functions. At no time did I ever demo using it to request content or download anything. This was deemed dangerous enough to the community to permanently strike my channel. I say permanently because I lost the appeal within hours. No feedback accepted, no second chances. Also, FYI: these strikes remain even after removing the offending video.
I am not the only one going through this. Much bigger channels producing much safer content have already been hit. Jeff Geerling recently posted about his experience with YouTube, and how, if it weren't for the size of his channel and the public outcry, he would probably be in the same situation I am in. Jeff's violation was for a video that had been up for over a year and had over a million views.
Another video was made by Mental Outlaw describing this situation in a broader context:
What worries me is how YouTube, now being so centralized, is in control of what people view based on its own definition of what is "harmful". Is me getting away from Google in favor of hosting my own data harmful? According to Google-owned YouTube: yes. It doesn't take a lawyer to see the obvious conflict of interest there.
The bigger issue is how perception can be controlled by eliminating educational content based on unclear definitions of what is "dangerous," and how opaquely these decisions get made. Is YouTube's motivation for targeting this type of content driven by fear of lost revenue, competition with Google, or pressure from an outside source? It really doesn't matter. The more important question is: why am I not the one deciding what I want to view? There will always be content which is truly harmful to people, and I commend YouTube for managing that so well, but when content which is purely educational is being labeled as dangerous and removed, I begin to fear how much control and centralization we are giving away to a company with no interest in explaining itself.