I believe that because Reddit is generally left-leaning and the majority of its users are opposed to AI, we may see a disproportionate rise in AI-generated right-wing content, which could influence public opinion. The Pentagon has also shown interest in using LLMs to gaslight people.
US Democrats are right wing. Reddit is overwhelmingly right wing.
The political bias of AI will be set by whoever tunes the models. Now mix in a bunch of voters asking LLMs who they should vote for, because people will outsource their thinking any chance they get. The result is that model owners can sway elections with very little effort.
Reddit doesn’t matter nearly as much as you think. It’s not going to move the needle appreciably.
Almost all English-language text is liberal (meaning capitalist); very little is socialist, virtually none is communist, and quite a lot is anti-communist. So there's your baked-in political bias for English-language models.
I wonder if using a Chinese-language model and then translating its output would produce better or worse political content.

Not 100% on the same topic, but we see this quite often in the software development world. Because the original AI training pull was biased toward React and Python, there is a deluge of new projects on those two stacks: people ask AI for software, and those two pop up.
In the same way, AI talking points will seem weirdly stuck in a certain year or decade, because that's where most of the talking points were pulled from.
We already see this: actual right-wing political advertisements made with AI, and Twitter is full of the stuff. It's legitimately easier to trick conservatives with the slop.