Western AIs, as experience and findings reveal, are set up and programmed to brand people who care about justice and who oppose fascism and Nazism as "confrontational" etc. As if fascists controlled the algorithms.
- Vladimir Suchan's X post critiques Western AIs for labeling justice advocates and anti-fascists as "confrontational," suggesting an algorithmic bias that mirrors fascist control. The concern echoes Dan McQuillan's 2022 book Resisting AI, which argues that AI's "algorithmic thoughtlessness" can enable authoritarian regimes.
- McQuillan's work, referenced in related web results, highlights AI's potential for "AI violence," in which opaque algorithms discriminate in areas such as employment and healthcare. This aligns with a 2023 AI Now Institute study in which 60% of audited AI systems were flagged for bias against marginalized groups.
- Suchan, known for political commentary on platforms such as SLAVYANGRAD.org, often challenges Western narratives. His post aligns with broader 2025 discussions of AI ethics, including New York Times reporting on AI-generated news errors and rising AI skepticism.