Alex Karp, the CEO of controversial tech company Palantir, raised eyebrows during a recent live interview with the New York Times. In a viral video of the discussion, Karp defended his company to the Times’ Andrew Ross Sorkin, gesturing dramatically with his arms, bouncing up and down on his chair, and struggling to make his point.
Palantir’s X account shared the video on Sunday morning and announced Karp is launching The Neurodivergent Fellowship: “If you find yourself relating to [Karp] in this video — unable to sit still, or thinking faster than you can speak — we encourage you to apply.”
Palantir announced Karp himself would conduct final interviews for the fellowship. In a reply to the first message on X, the company included an application link to the fellowship, which is available in Palantir’s New York City and Washington, D.C. offices.
“The current LLM tech landscape positions [neurodivergent people] to dominate,” according to the application. “Pattern recognition. Non-linear thinking. Hyperfocus. The cognitive traits that make the neurodivergent different are precisely what make them exceptional in an AI-driven world.”
Palantir, a data and analytics company co-founded by conservative “kingmaker” Peter Thiel, was quick to argue that the fellowship is not a DEI initiative.
“Palantir is launching the Neurodivergent Fellowship as a recruitment pathway for exceptional neurodivergent talent,” according to the application. “This is not a diversity initiative. We believe neurodivergent individuals will have a competitive advantage as elite builders of the next technological era, and we’re hiring accordingly for all roles.”


What a load of bullshit. LLMs will be used in a million ways to sideline neurodivergent people in society, whether it’s BS AI “help” replacing a human teacher for a neurodivergent student, or job applications using AI to illegally screen and filter out neurodivergent candidates. This is a bad decade for neurodivergent people, and it is likely only to get worse as societies collapse into bigotry under the endless stresses and catastrophes of runaway climate change.
Right, there was legal pressure on the inputs of decision-making to make it more egalitarian, or whatever. And by other criteria too.
So what happens is full obfuscation of inputs. In the form of LLMs.
Philosophically this is correct in my opinion, trees should be judged by their fruit.
A simplified comparison is British vs. Prussian army philosophy. In Prussia, when evaluating an officer’s performance, they’d judge his decision-making process and its inputs, even if the result was catastrophic, while in the British army and navy they’d judge only the result, no matter how sound the decision-making. That approach has often been called unjust and not nuanced enough, but one way lost historically and the other won. For a reason. Judgment of inputs has more failure points. It causes degeneracy long-term.
A bit like how every metric used as a KPI ceases to be a useful metric (Goodhart’s law). It’s a commonly quoted MBA rule, except MBAs are generally not smart enough to remember it.
The alternative to this is responsibility for everything that happens downstream, no matter which inputs you get. In exchange, you are allowed to have any decision-making process at all; you just pay for it in full if something goes wrong.
We are being pushed by evolution (including technical progress) to adopt that approach, and it’s good, but it’ll probably take lots of wars and revolutions. People who hide malice behind formally correct inputs do resist. And they do hold power.
Instead of inputs, you should treat any social mechanism as a black box, and both limit and judge its outputs. If they are outside the limits, discard and punish. If they are inside the limits, evaluate and bill, in prison years or in fines or both. Or reward.
You never know all the inputs anyway, and you can’t tell whether they are correct.