Being a Human Capitalist in a Failed Signaling World
AI is just a part of the dilemma.
When I first came across these ideas early in my economics education (discovered on my own before any formal training—this becomes relevant below), I was wholeheartedly a proponent of the human capital theory of education. The signaling theory made sense to me. Yet I just “knew” it couldn’t be true because . . . I didn’t want it to be true.
I wanted to believe in a meritocracy in the strongest sense. As a high school student reading A LOT on my own and then as a young college student, I naively wanted to believe everyone was at university to skill up like I was.
It turns out economic reality (like all reality) doesn’t care what you want.
To be sure, there was some active human capital pursuit. And my belief in it was reinforced whenever I encountered someone engaging with ideas (from argumentation in the classroom to the proverbial midnight bull session). But after the nth “will this be on the test?”, the scales fell from my eyes.
To be honest, I didn’t fully grasp the distinction between the two explanations. Gary Becker’s work spoke to me in many ways (as it should to any good economics mind). Rational explanations for behavior matched my worldview. Throw in his counter-conventional wisdom on things like criminal rationality and the economics of discrimination, and I was hooked. From this, his human capital theory flowed naturally in my mind.
Ironically, I initially failed to see that human capital was itself a conventional-wisdom view, one that a Beckerian take on rational actors should have stress-tested, and one that, even then, is still not wrong.
The human capital model explains a lot about outcomes like wages and achievement. It just doesn’t explain the motivation or other behaviors of students. Nor does it explain the education-industrial complex we see around us. Signaling does.
It seems that people behave as if human capital is their motivation . . . as if guided by an invisible hand . . . all the while rationally self-directing to send the best signal they can under constraint.
I always thought (hoped) the best signal was “I know my sh*t” when it was actually “I’m smart enough, I’m conscientious, and I’m conformist”.
Society’s evolution toward education maximalism has created a very inefficient world, one that sacrifices human capital while constantly ratcheting up the signaling requirements.
How does it sacrifice human capital? By diluting the value of signaling at every turn.
An efficient alternative to building human capital is buying the disguise of human capital. When it is cheaper to look the part than to be the part, people act accordingly, rationally taking the path of least resistance.
Credential inflation is an effect of this state of affairs. Grade inflation is a partial cause of it. Any avenue that allows cheating (in any form) is also a major cause.
As I recently wrote,
It is very difficult for a child to actually believe much less live the human capital model of education. It is very easy for a child to understand and attempt to hack the signaling model. “You only cheat yourself by cheating” means something to a human capitalist. It is nearly a complete lie in a world dominated by signaling.
Employers looking for a reliable signal of an employee’s human capital increasingly must fight through a sea of noise.
This is almost a market failure. Yet like so many apparent market failures, it is a failure stemming from government enabling good-but-hollow intentions. Bryan Caplan emphasizes that “government does the bad things that sound good.” Education maximalism is a case in point.
The fallacy is: Education is good; government can increase education; therefore, government can increase good by increasing education (diminishing returns ignored and incentive mismatch denied).
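The diminishing-returns piece is easy to see in a stylized calculation. Suppose, purely for illustration (the log form and the numbers below are my assumptions, not estimates of anything), that productivity rises with the log of years of schooling:

```python
import math

# Stylized illustration only: assume (an assumption, not an estimate)
# that productivity rises with the log of years of schooling.
def productivity(years: float) -> float:
    return 100 * math.log(1 + years)

# Marginal gain from one more year, at different starting points.
for y in (1, 5, 9, 13, 17):
    gain = productivity(y + 1) - productivity(y)
    print(f"year {y:2d} -> year {y + 1:2d}: marginal gain {gain:5.2f}")
```

The early years add a lot; year 17 to 18 adds a sliver. “Education is good” is true at the start and nearly empty at the margin where the mandates operate.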
The upshot is human capital is crowded out.
These thoughts came to me while reading Paul Bloom’s recent post, “Why are so many professors conservative?”
He writes,
It’s interesting, then, that the only aspects of our lives where we abandon our bold thinking and our enthusiasm for radical changes are those that matter the most to us.
I think there are two reasons for this.
The first is from the historian Robert Conquest (his “First Law of Politics”).
“Generally speaking, everybody is reactionary on subjects he knows most about.”
…
I’ll put myself up as an example here. I have a lot of ideas about how to improve the Supreme Court, the training of medical students, and the making of Hollywood movies, but I have to concede that my enthusiasm for radical change might be in part because I don’t understand these institutions very well. Conquest would say that if I were better informed, I would have a clearer understanding of the value of the status quo and a better appreciation of the drawbacks of some of my radical ideas. I’d be more conservative.
…
The second reason we are so conservative is less respectable than Conquest’s First Law. It’s not about expertise; it’s about skin in the game. If you have a lot invested in a system, you won’t want it to change. Asking a prof about AI is like asking a taxi driver to weigh in on Uber. I think I have good reasons for my (conservative) defense of tenure, but you’d be forgiven for assuming that, having worked for and benefited from the protections of tenure, I don’t want them taken away. Part of professors’ unwillingness to give up on lectures is that they take a long time to prepare—once that time is invested, we don’t want to start anew. We certainly don’t want to transform the university in a way that risks making us obsolete.
It is a strong essay with implications beyond academia. Personally, it hit home as I considered how easily I (and my colleagues) play the reactionary, dismissing counter-conventional ideas in my world of investment management.
It should come as no surprise that successful disruption quite commonly comes by outsider attack. It is very hard to challenge what has worked. I saw it in my time in the newspaper industry, and I see it today in money management.
Of course, it should be hard to challenge what has worked because IT HAS WORKED and there are likely unintended downsides. Chesterton’s fence is a useful guardrail. But some fences have to come down. And to stretch the analogy, not taking them down will simply fence you in.
Fixing education signaling in an AI world
Two things remain true:
Human capital is vital.
Signaling is essentially all we’ve got.
No one can fully know what someone actually knows and can actually do. We cannot even fully know it of ourselves. I have blind spots to my own capabilities and limits. I think I know things only to later learn they are either outright untrue or, more often, more nuanced or complicated than I previously understood.
Trial and error signals these limits, but it can be a risky and expensive method. A benefit of signaling is that it can head off trial and error to begin with.
Obviously, for a signal to be valuable, it must be reliable. Reliability requires truth in what it is supposedly signaling. For education this means it cannot crowd out human capital.
This is not a tautology. Sure, the signal is supposed to signal knowledge (along with conscientiousness and conformity), making my crowding-out concern seem like a contradiction in terms. However, as I said above, the signal can substitute for human capital itself by indicating, “I know how to look like I know the stuff,” rather than “I actually do know the stuff”.
Any tool or technique that can be co-opted to substitute signal for human capital degrades the signal into noise.
That brings us to AI and how it potentially pollutes this mess even further.
Borrowing from a great philosopher, AI is becoming like alcohol—the cause of and solution to all of life’s problems.
Arnold Kling has a lot of constructive thoughts on AI in education. Similar to my point about AI = alcohol, he wrote recently:
I keep coming across strong opinions about what AI will do to education. The enthusiasts claim that AI is a boon. The critics warn that AI is a disaster.
It occurs to me that there is a simple way to explain these extreme views. Your prediction about the effect of AI on education depends on whether you see teaching as an adversarial process or as a cooperative process. In an adversarial process, the student is resistant to learning, and the teacher needs to work against that. In a cooperative process, the student is curious and self-motivated, and the teacher is working with that.
If you make the adversarial assumption, you operate on the basis that students prefer not to put effort into learning. Your job is to overcome resistance. You try to convince them that learning will be less painful and more fun than they expect. You rely on motivational rewards and punishments. Soft rewards include praise. Hard rewards include grades.
If you make the cooperative assumption, you operate on the basis that students are curious and want to learn. Your job is to be their guide on their journey to obtain knowledge. You suggest the next milestone and provide helpful hints for how to reach it.
Under the adversarial assumption, AI is a disaster. By quickly finding and summarizing information, it deceives students into believing that they can learn without effort. By writing essays for students, it facilitates students deceiving teachers, making it more difficult for teachers to correctly reward student effort.
Under the cooperative assumption, AI is a godsend. Unlike a teacher with a class of a couple dozen students, an AI can act as a personal tutor. It can provide the student with exactly the information and exercises that he needs, given his current level of understanding.
AI significantly adds to human capital. AI can be (soon, if it is not already) a hyper step change. All of a sudden, amateurs can access and put to use expert-level information.
Obviously, there can be, is, and will be a disconnect between sounding like an expert and actually being one. So here again is the risk that AI allows signal to substitute for human capital.
A large public library in a small city signals, perhaps falsely, that the city values and possesses knowledge. Most of the townsfolk might never go there, but the damage from a false signal here is limited.
In the opposite direction, the benefit of a good signal is limited too. The library is available for those who wish to use it to build human capital. But how would one signal that? Prominently displaying one’s library card?
For a kid like me in the 1980s, having a set of encyclopedias or a personal computer at home was a leg up on the competition. It made achievement in school easier, and that was human capital achievement. It led to good signals (e.g., high grades, graduation with honors, etc.). The cause and effect were properly ordered: human capital causing (giving rise to) the signal rather than the signal (falsely) existing absent human capital.
Obviously AI can be like that and all the more so. It can also be like when I got a Casio calculator watch sometime around the third or fourth grade. Why learn times tables when a machine can do that for you? Or so I thought. My math grades went up nonetheless—signal absent human capital.
Soooo, what is the AI fix to education signaling?
The bad news is education signaling is broken. The good news is AI is about to break that broken signal.
There are two very different ways it will break it:
Making it less relevant through diminishment
Making it less reliable through counterfeiting
Irrelevance refers to how the signal is shown to be the noise it actually is in many cases. This is the analogue of the calculator making good basic math skills irrelevant. It is a leveling effect across many domains. How do I know you are actually good at XYZ? More importantly, why do I care if you are? You could be using AI to fool me, but you (and now many others) can use AI to do the work anyway. In this way it substitutes borrowed human capital for owned human capital.
There is also a counterintuitive angle to this. On the surface, diminishing differences in human capital seems like it should undermine human capital as the explanation for education. On the contrary, what it diminishes is superfluous educational attainment. Why pursue a claim to have skilled up in something when that claim has greatly lessened value? The answer is you wouldn’t—at least not nearly as much as before. What is left is largely those who really have something to gain from the skill learning rather than the skill signal.
Relatedly, there is unreliability due to counterfeiting. Here the examples are letters of recommendation and take-home essays, among so many others. With irrelevance we have “that’s no ‘skill’ because everybody has it”. With counterfeiting we have the straightforward “I have no reason to believe this signal”.
The problem with signaling has been that it creates perverse incentives. Namely, an arms race emerges because enhancing the signal of greater human capital pays off personally without implying greater human capital—at least not in a socially valuable sense. I want to signal my greater human capital, so I go to grad school. You do too, following suit as do many, many others. The marginal gain in actual human capital gets smaller and smaller, especially when we consider how little this additional schooling adds to our ability to actually create value—we are largely the same workers with or without the grad degree (signal). Therefore, we overinvest in education because we have to in order to win the rat race.
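The arms-race logic is simple enough to sketch in a few lines. In this toy model (the wages, costs, and rationing rule are all illustrative assumptions of mine, not estimates from any literature), a fixed pool of good jobs is rationed by a credential that adds nothing to productivity:

```python
# Toy signaling arms race: a credential that adds no productivity,
# used only to ration a fixed pool of "good jobs".

W_GOOD, W_BAD = 100, 60    # wages in good vs. ordinary jobs
COST = 10                  # private cost of the credential (pure signal)
GOOD_JOB_SHARE = 0.25      # fraction of jobs that are "good"

def expected_gain(credentialed_share: float) -> float:
    """Expected private payoff from buying the credential, given the
    share of workers who already hold it (good jobs go to holders)."""
    p_good = min(1.0, GOOD_JOB_SHARE / max(credentialed_share, 1e-9))
    return p_good * (W_GOOD - W_BAD) - COST

# Best-response dynamics: enrollment rises while the private gain is positive.
share = GOOD_JOB_SHARE
while expected_gain(share) > 0 and share < 1.0:
    share = min(1.0, share + 0.05)

print(f"equilibrium credentialed share: {share:.2f}")
print(f"signaling cost burned per worker: {share * COST:.1f}")
```

With these numbers everyone ends up credentialed, good jobs are once again allocated as if by lottery, and the credential cost is pure social waste: the overinvestment described above.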
Exposing the signal as busted (less relevant and less reliable) lets us call a truce. As we realize the signal is worth less than we thought (though still not worthless), we have less incentive to overinvest in it.
These are indirect ways AI can help fix the broken signal. There is also a way it will directly fix signaling: by becoming the evaluator of the signal.
AI as an educational tool isn’t limited to being the teacher. It can also be the grader. It can conceivably be used to evaluate whether someone knows their stuff.
Think of this as an extension of already existing tools and techniques. We’ve had IQ tests and other job-specific testing, pre-certification programs like licensure, interviews, background and reference checks, resumes, etc. Some of these, like IQ tests generally, we ignorantly ban from use. Others, like occupational licensure, we allow to be compromised into tools of incumbent protection. All of them suffer from inefficiencies.
AI can potentially serve in this role in a way only the most expert and trusted evaluation tool plus voucher could dream of. In fact, it runs the risk of being too good at the job of signal evaluation.
Those other methods all carry a risk of false positives (where a positive indicates someone is qualified for the job). Hallucination risk aside, AI, if not used carefully, would likely be too scrutinizing, generating false negatives. Properly applied as a tool of evaluation, AI can efficiently and effectively get signaling closer to a Goldilocks level of just right.
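To make the false-positive/false-negative tradeoff concrete, here is a minimal sketch with invented score distributions (the means, spreads, and cutoffs are illustrative assumptions only, not any real evaluator’s numbers):

```python
import random

random.seed(42)

# Invented evaluator scores: qualified candidates tend to score higher,
# but the distributions overlap, so no cutoff is error-free.
qualified   = [random.gauss(75, 10) for _ in range(1000)]
unqualified = [random.gauss(55, 10) for _ in range(1000)]

def error_rates(cutoff: float) -> tuple[float, float]:
    """False-positive rate (unqualified passed) and false-negative
    rate (qualified rejected) at a given passing score."""
    fp = sum(s >= cutoff for s in unqualified) / len(unqualified)
    fn = sum(s < cutoff for s in qualified) / len(qualified)
    return fp, fn

# A lax bar passes frauds; an over-scrutinizing bar rejects the qualified.
for cutoff in (50, 65, 80):
    fp, fn = error_rates(cutoff)
    print(f"cutoff {cutoff}: false positives {fp:.1%}, false negatives {fn:.1%}")
```

The Goldilocks point is the middle cutoff: an evaluator tuned only to avoid false positives (the highest bar) quietly rejects most of the genuinely qualified, which is exactly the over-scrutiny risk just described.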
Hope for human capitalists
These are exciting times for those like me who cherish human capital formation. Sure, we could demur and grow depressed as AI diminishes our hard-earned advantages. But this would be beyond bad form. We should celebrate these advancements even as they lower our status.
More importantly, the opportunities to build human capital are greater than ever.
And this offers a bit of hope for continued proponents of the human capital model as the dominant one—true believers and idealists alike. Through a process of cleanup and destruction, along with better evaluation, AI may take signaling down a peg in dominance. And as a magnificent new tool for human capital development, AI might help us fine-tune useful human capital investment—helping to align the signal with the human capital we ultimately desire.
We may find a future with a little less wisdom in the old quote, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
Substacks referenced:
Paul Bloom, “Why are so many professors conservative?”
Arnold Kling, on AI and education