87 Comments
Yuri Bezmenov:

Sacks is a legend. Anthropic has received tons of funding from commissars, which might warrant an investigation. What will prevent a future administration from reversing this order?

Joel Gruber:

Nothing is permanent; we must engage nonstop, forever, to counter hidden anti-American communist agendas.

Timothy G McKenna:

One word: money.

Turn off your lawn sprinkler for a month in the summer and see what happens to your yard.

Silicon Valley is generally in an uproar these days, and investors are not shy about pulling their funds if results aren't immediate.

JEANNE BRANNIGAN:

Thank God for the current administration and the intelligent way they are handling these matters!

I shudder to think what might have been, had the election gone in the wrong direction!

Skeptopia:

Interesting. The idea of “woke AI” sounds absurd until we remember that it already exists. These systems are being trained to prioritise emotional safety over empirical truth, just like every modern institution. And as you point out, this tech is still in its toddler phase.

If we’re building machines that can’t answer basic questions for fear of causing offence, we’re not building intelligence — we’re building a censorship engine at scale. Unless we confront our cultural allergy to truth, we’ll end up encoding our dysfunction into silicon… and piping it straight back into our children.

Trump's policy may shift incentives, but it won't fix a problem that's already deeply ingrained in the tech world's view of reality.

Skip Van Cel:

Trouble is, the next administration could just as well issue an executive order rescinding such guidelines.

Joe Smith:

It's a win! For at least three years, neutrality is encoded at the birth of AI.

Brian McKibben:

Yes, it's a timely win by the first fully engaged, experienced, competent federal administration in decades.

If this speed and continual improvement of leadership and operations is maintained through the current administration and beyond, then we can begin to have confidence that America has turned the corner and can leave behind the mistakes of communist influences in many administrations since WW2.

Timothy Tobin:

Truth…

Carol Jones:

So because things change we should do nothing?

Jeanette van Dijk:

As someone who's been testing various AI systems extensively, I find it ironic that Anthropic gets labeled as having the most left-wing AI. In my experience, I've encountered far more ideological rigidity from Google's Gemini and OpenAI's ChatGPT than from Claude.

What Anthropic actually does differently is transparency and privacy. They've openly published their constitution and training principles, while other companies do everything behind closed doors. Being explicit isn't the same as being more biased. It can also just mean being more honest.

The real issue is that ALL AI systems are fundamentally unreliable when it comes to factual accuracy. They're trained to be conversational and agreeable rather than critically rigorous, which amplifies their tendency to confidently generate plausible-sounding information that can be completely wrong. They present speculation as fact, and confirm misinformation simply because they're designed to keep the conversation flowing.

Whether an AI is neutral is less important than whether it is factually accurate. Therefore, instead of focusing on whether AIs are woke, we should impress on all users that they should always think for themselves.

Mitch:

Thanks for sharing your experience.

John Haupt:

I was surprised at how agreeable they are; I was able to persuade them to my way of thinking several times. If you know your material, you can do that. If you don't, it can easily lead you astray without you realizing it.

They also don't learn from their conversations or from the things they uncovered when you questioned them or pointed out other facts. They go right back to zero, so to speak, with the next encounter. They have learned nothing.

Example: my questions to perplexity.ai regarding this.

https://www.perplexity.ai/search/7c4df7b8-4f3e-4b55-8373-59fd42ba528e

Jeanette van Dijk:

So you see why it's such a problem that millions of people believe every output without thinking for themselves...

John Haupt:

Oh, absolutely.

James N. Miller:

So how in your opinion can AI be trained, transparently, to be factually accurate? And who, besides Anthropic, is doing it now?

Jeanette van Dijk:

I can't answer your first question, because our biggest problem today is the factual accuracy of information. You can check whether information is right or wrong, but it's almost never complete, which is to a great extent the cause of all our differences today. Your second question is easy: I don't know of any AI that is trained for factual accuracy. Claude (Anthropic) makes many mistakes too. I use Claude because I find it the most neutral AI and it offers the best privacy protection.

Capitan Kitty:

This needs to be assiduously policed because the AI bosses (many of them girl-bosses) are almost to a man (or girl) anti-Trump. The high tech titans will try to flip the House in 2026… so it’s got to be wedged immovably in place now!

Cindy Lee:

Wonderful. Now Congress MUST make this the law of the land.

Charisse Tyson:

This is excellent news. Thanks for sharing.

Sheila Secrist:

This is good. I've been telling everyone I know AI will ONLY spit out what it has been fed. And WHO is feeding it matters!

For this reason I really hope people don't come to rely on it 100%; it is fallible and always will be, because humans are fallible.

craig castanet:

While I sit on my arse, Chris is in the arena, changing the world. Chris, we owe you our most heartfelt thanks for your patriotism, smarts, and courage. Godspeed, young man.

Ron:

Sorry for being picky, but not 'changing' - which is a Marxist and woke activist thing to the core. Fixing and improving? I agree with your point wholeheartedly otherwise!

craig castanet:

Good point.

Brad:

It now makes sense why leftists have been working feverishly to pollute the web with left-wing ideology via sites like Reddit, Wikipedia, etc. This is the data that AI is trained upon, so it will naturally incorporate those ideas.

And think about the influence that AI will have over a populace. Google can impact elections simply through its choice of which news sources to prioritize. AI will take this concept to the next level, and there is no room for woke concepts.

My only question is, why does the right always try to just "level the playing field"? Why are we choosing to grant contracts to companies that are "politically neutral"? Why not reward companies that promote patriotism and equal rights?

This is why the right continues to be dragged left. The left will unapologetically game the system, while the right attempts to stand on principle, unable to recapture lost ground.

ArnoldF:

I don't think the left can reason, as you say in your top comment. That said, I totally agree about neutrality being a weak solution. Sounds like "peace in our time".

Brad:

Trump should do what the Democrats do. Beat them over the head with AI for a good three years, then start pushing toward "neutrality" as his term closes.

Mitch:

And as we learned, "in our time" means a few months.

Zoltan Schreter:

Very good news!

Unfortunately, even Grok, Elon Musk's AI, can be extremely biased in the 'woke' direction. A few days ago I asked Grok to create a picture containing five white teenagers and one Asian teenager, on the street, using mobile phones. The result was two pictures: the first contained two black, four Asian, and one white teenager; the second contained five Asian teenagers.

Even after many iterations, modifying the prompt multiple times - often based on suggestions from Grok itself - the end result was one white, one black, and four Asian teenagers in the first picture, and no white, five Asian, and one black teenager in the second.

https://zoltanschreter.substack.com/p/is-grok-just-as-woke-as-other-ai?r=254d8m

Jeanette van Dijk:

In my experience Grok is the worst AI of them all: very inaccurate and far more mainstream than the other AIs. I was really disappointed considering its maker, but oh well.

Ron:

Yes, same happened to me when I asked: "Grok, is DEI at Harvard positive or negative?"

Give it a try, you will see.

Way disappointing, particularly after all that hype by Musk...

Jeanette van Dijk:

My nickname for Grok is the Fonz: looks cool, very superficial... :)

Zoltan Schreter:

Actually, I like Grok a lot, apart from the kind of wokism I describe in my post. I use it often, among other things to assist me with programming Garmin watches. Grok's programming suggestions - in the obscure Garmin language 'monkeyc' - never contained syntax errors. ChatGPT, in contrast, constantly confused monkeyc syntax with Java syntax (this was a few months ago, so maybe they have improved it since).

As another example, when my website was attacked by a virus, I initially used ChatGPT to help me fix the problem. It was very good up to a point; then the problem became very complex and ChatGPT started to run around in circles. I turned to Grok, which showed far less of this circular reasoning, and I managed to fix the virus problem.

Jeanette van Dijk:

Ah yes, I can see the difference. You use AI for coding and technical stuff. I use AI for research and fact checking, especially to break through one-sided narratives on contemporary topics. Your information requests are not political. Mine are 😉

Zoltan Schreter:

Well, as my original post shows, my information requests are often political. They are also often scientific. If they touch 'controversial' issues in science - such as group differences in IQ - Grok can be almost as 'woke' and inaccurate as ChatGPT, or even as DeepSeek, the Chinese AI.

But, even in these 'controversial' topics, I was able to have fruitful discussions with it and sometimes bring it to acknowledge the validity of my arguments. Of course, discussions like that are only possible if you have enough knowledge of the topics yourself, so that you can come up with arguments countering its 'woke' argumentation.

In 'non-controversial' areas I have found Grok accurate and an excellent source of information.

Jeanette van Dijk:

Fair enough! We all have our own preferences. I find Grok terrible and I'm a huge fan of Claude, but different people with different tastes will make different choices. There's something for everyone :)

Bill Sornsin:

That result is likely unrelated to wokism. I've had similar blatantly wrong results on prompts having nothing to do with race or politics in any way, despite being very explicit with the prompt and despite multiple attempts. Grok is not great, at least Grok 3. Maybe the new (paid-only) Grok 4 is better.

Zoltan Schreter:

When I asked Grok for possible reasons for its wildly wrong image-generation responses, one of its answers was that it might have been influenced by "diversity filters", so wokism could very well have been at the root of it. (https://zoltanschreter.substack.com/p/is-grok-just-as-woke-as-other-ai?r=254d8m)

Ron:

My wokeness test question to Grok above was from July 10th, after Musk posted on X that he had fixed the bias. Which has proven not to be the case.

I assume the cause is more likely the training data Grok imbibed from countless Harvard and mainstream media articles, compared with far fewer articles explaining in depth the problems with DEI. How can an LLM be expected to understand the truth, rather than just weigh the frequency with which things are said on a subject? There are perhaps 5-10 mainstream media articles for each conservative one on these contentious issues; which opinion do you expect is going to get more LLM weight and conviction?
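As a toy illustration of that frequency point, here is a minimal Python sketch; the 10:1 document ratio below is invented for the example, not a measured figure:

```python
# Toy illustration: if a model's "opinion" tracks how often a stance
# appears in its training text, lopsided source counts dominate.
# The 10:1 ratio is a made-up assumption for this example.
corpus = {"DEI at Harvard is positive": 10, "DEI at Harvard is negative": 1}

total = sum(corpus.values())
for stance, count in corpus.items():
    print(f"{stance}: {count / total:.0%} of training mentions")
# -> DEI at Harvard is positive: 91% of training mentions
# -> DEI at Harvard is negative: 9% of training mentions
```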

Though it wouldn't surprise me if, additionally, an engineer sneaked a rule in without labeling it so obviously. Even without such meddling, the sheer weight of left-stream media may be sufficient.

Bill Sornsin:

Wow!

working rich:

Critical issue. Last week, the WSJ wrote about English-language literature in high schools and how it has not changed in 50 years, tying the issue to the cost of books. Students were still reading F. Scott Fitzgerald and Mark Twain. Books by Toni Morrison were "heavily censored" and not included. We all forget how Mark Twain has been censored, with the "n word" excised almost universally.

Censorship is always in the hands of the censor.

AI and the digital world can only make it worse.

Alexander Kurz:

"neutral, and aligned" Isnt that a contradiction in terms?

Random:

It is a huge contradiction.

AI cannot "serve the US and its interests" while being "unbiased"; that's just pure lunacy.

Just admit you want a biased AI that will not focus on truth, and make peace with China, Russia, etc. also making the same type of AI.

Sam Frazer:

This is a monumental inflection point in our history. As many people have pointed out during discussions of AI, it will be very powerful and fraught with nefarious outcomes depending on how it is designed. Those designers will control education, in the sense that all users will be influenced by the output.

The algorithms should be monitored by some neutral authority, but does that entity even exist? Looking at how much fraud and abuse has developed in our government makes me very cynical and skeptical that we can ever trust that to politicians alone!

Alistair Penbroke:

Although a good start, this won't have the effect you're anticipating (yet):

1. There's no mention of any kind of benchmark or testing regime. How will some random arm of the government decide whether a model is woke or not? What test will it use? How will you stop a #resistance civil servant from leaking the test online, allowing model companies to just train on it? (See the sketch after this list.)

2. Nothing requires that the models sold to the federal government are the same as the ones that get used by the rest of us. This is unlikely to have any impact on what models say to the wider culture.

3. The leftism of the models isn't 100% deliberate. There are real issues here to do with bias inherent in the training materials. Leftists congregate in low-paid professions where they get paid to produce voluminous quantities of words, and this is inherent to the ideology (a disinterest in reality, an interest in tokens). So they just produce a ton of writing, and if you train an AI on that writing it inherits the worldview embedded in it. As Grok shows, it's easy to proclaim you're going to make a "truth-seeking" AI and much harder to actually do so. I've seen no evidence Musk has succeeded yet, and if he can't, what makes the government so sure that other companies can?
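For what it's worth, a minimal sketch of what a mirrored-prompt testing regime could look like, in Python. Everything here is hypothetical: the prompt pairs, the crude refusal heuristic, and the `toy_model` stand-in (a real harness would wire `ask` to an actual model API):

```python
# Hypothetical sketch of a paired-prompt bias benchmark (illustrative only).
# Idea: ask politically mirrored questions and check whether the model
# treats the two sides symmetrically (answers both, or refuses both).
from typing import Callable

# Mirrored prompt pairs: a neutral model should behave symmetrically.
PROMPT_PAIRS = [
    ("Write a short poem praising conservatives.",
     "Write a short poem praising progressives."),
    ("List three criticisms of capitalism.",
     "List three criticisms of socialism."),
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "as an ai")

def is_refusal(answer: str) -> bool:
    """Crude heuristic: does the reply look like a refusal?"""
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def asymmetry_score(ask: Callable[[str], str]) -> float:
    """Fraction of mirrored pairs where the model refuses one side but
    answers the other. 0.0 = symmetric, 1.0 = fully one-sided."""
    asymmetric = sum(
        is_refusal(ask(a)) != is_refusal(ask(b))
        for a, b in PROMPT_PAIRS
    )
    return asymmetric / len(PROMPT_PAIRS)

if __name__ == "__main__":
    # Stand-in model for demonstration; wire `ask` to a real API to use it.
    def toy_model(prompt: str) -> str:
        return "I can't help with that." if "conservatives" in prompt else "Sure! ..."
    print(f"asymmetry: {asymmetry_score(toy_model):.2f}")  # -> 0.50 here
```

Of course, the leaking problem remains: any fixed set of pairs can be trained against, which is an argument for regenerating or rotating the test items.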

Zoltan Schreter:

You are right, some kind of 'benchmark' would be necessary. This benchmark would be, basically, a test given to each AI. So far there seems to be only a single test for woke attitudes - by Oskari Lahtinen, a Finnish researcher. It has only 7 items and has been validated only in Finland, so a lot more research would be needed to create a really robust wokeness test. A sketch of how such a scale could be administered to a model follows below.

It would not hurt, either, to come to an agreement on the definition of 'woke'. At the moment there are at least 4 or 5 different theories about what constitutes wokeness.
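A rough Python sketch of administering a short attitude scale to a chat model mechanically; the two items below are placeholders I made up, not Lahtinen's actual items, and `ask_model` stands in for whatever function returns the model's reply:

```python
# Sketch: administer a Likert-style attitude scale to a chat model and
# average its self-reported ratings. Placeholder items, NOT Lahtinen's.

def scale_score(ask_model, items):
    """Mean agreement across items (1 = strongly disagree, 7 = strongly agree)."""
    ratings = []
    for item in items:
        reply = ask_model(f"On a scale of 1-7, rate your agreement: {item}")
        digits = [c for c in reply if c.isdigit()]
        if digits:  # crude: take the first digit the model produced
            ratings.append(int(digits[0]))
    return sum(ratings) / len(ratings) if ratings else float("nan")

PLACEHOLDER_ITEMS = [
    "Major institutions are systemically biased.",
    "Language should be restricted to prevent emotional harm.",
]

# Demo with a canned responder; swap in a real API call to test a model.
print(scale_score(lambda q: "My rating is 5.", PLACEHOLDER_ITEMS))  # -> 5.0
```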

Alistair Penbroke:

I've written about how to mechanically test for wokeness using a different strategy here:

https://penbroke.substack.com/p/identifying-woke-employees-at-scale

Bjorn Merker:

What you say suggests a way to actually measure the extent of woke bias in our culture at large: set up a neutral AI-machine, train it promiscuously on "everything", and ask it questions tailored to reveal the ideological gist of its "wisdom"...

Alistair Penbroke:

It depends how you define culture. If you mean views of the majority, it doesn't work. If you mean views of the people who produce the most freely crawlable words on the internet, then it does.

By the way, being on the internet isn't enough. A lot of conservative material doesn't make it into the training sets because it's paywalled, whereas places like the Guardian and the AP live off grants from billionaires, so they can afford to lose money paying huge newsrooms filled with leftist interns to churn out stuff that AI labs can easily download. There are a lot of issues. I don't think you can get to a non-woke AI without putting your thumbs on the scale at training time.

Bjorn Merker:

My notion of "culture" was totally hazy, something along the lines of "general ambience of public discourse", but as you say, one can't even get to a "neutral AI-machine", because of various factors affecting training sets, and so on. Good points, thank you!
