TL;DR
- Ancient DNA time-series show a ≈0.5 SD rise in polygenic scores for IQ/educational attainment since the Neolithic.
- Trends are convergent across West Eurasia, East Asia, and other regions, while alleles for neuroticism/depression declined.
- The Breeder’s Equation explains how weak per-generation selection compounds into sizable changes over 10 000+ years.
- Modern datasets reveal a recent reversal (negative selection on IQ alleles), proving human cognitive evolution is ongoing.
- Claims that “nothing has changed genetically in the human mind for 50 000 years” conflict with genomic and quantitative-genetic evidence.
⸻
## Ancient DNA: Global Signals of Cognitive Selection (2023–2025)
Recent ancient-genome studies confirm substantial directional selection on intelligence-linked traits throughout the Holocene. Using time-series polygenic scores (PGS) – indices of genetic propensity for complex traits – researchers have tracked allele frequency shifts across thousands of ancient individuals. The emerging picture is that alleles associated with higher cognitive abilities consistently rose in frequency over the last ~10,000–12,000 years in many human populations (a minimal sketch of this kind of time-series PGS analysis follows the list below):
- Western Eurasia: A 2024 study of ~2,500 ancient genomes from Europe and the Near East found “positive directional selection for educational attainment (EA), IQ, and socioeconomic status (SES) traits over the past 12,000 years.” Polygenic scores for EA and IQ increased markedly from the Upper Paleolithic through the Neolithic era, suggesting that the cognitive demands of early agriculture and urbanization exerted selection pressure on general intelligence. Intriguingly, polygenic scores for neuroticism and depression declined over the same period, likely because alleles predisposing to greater mental stability hitchhiked along with those boosting problem-solving ability (given genetic correlations between these traits). In other words, as genes for higher cognition rose, genes linked to negative affect tended to be purged as a side effect.
- East Eurasia: Parallel results come from a 2025 analysis of 1,245 ancient genomes spanning Holocene Asia. It likewise “observed significant temporal trends” with positive selection on cognitive traits – notably alleles for higher IQ and EA – across Eastern Eurasian prehistory. The same study found these trends were robust even after controlling for demographic shifts (using admixture and geographic covariates). Interestingly, it reported that alleles associated with autism spectrum traits rose (potentially reflecting enhanced systemizing or attention to detail), whereas those for anxiety and depression fell, mirroring the European pattern. Selection on height was more context-dependent, varying nonlinearly with climate – but the consistent increase in educational/IQ-linked variants suggests a broad, convergent evolutionary response in societies undergoing Neolithic transitions.
- Europe (mainstream replication): A 2022 study by Kuijpers et al. assembled genome-wide PGS for various traits in ancient Europeans, corroborating that “after the Neolithic, European populations experienced an increase in height and intelligence scores,” along with decreases in skin pigmentation. This Frontiers in Genetics study used GWAS-based polygenic indices for intelligence and found a sustained upward trend in cognitive potential from ~8,000 years ago onward. Notably, this aligns with the archaeological record: the Neolithic Revolution and subsequent societal complexity created new niches where general cognitive ability (GCA) was highly rewarded.
- Direct time-series selection scans: In late 2024, a team led by Akbari et al. (including David Reich) introduced a powerful ancient-DNA selection test looking for consistent allele frequency trends over time. Applying it to 8,433 ancient West Eurasians (14,000–1,000 BP), they identified an “order of magnitude more” selection signals than prior methods – some 347 loci with >99% posterior probability of selection. Alongside classic adaptations (e.g. lactase persistence), they found polygenic evidence of directional selection on cognition-related traits. In particular, the authors report that “combinations of alleles that are today associated with… increased measures related to cognitive performance (scores on intelligence tests, household income, and years of schooling)” underwent a coordinated rise in frequency over the Holocene. For example, alleles improving educational attainment appear to have been driven upward by strong selection in West Eurasians, especially after ~5,000 BP. These findings strengthen earlier hints from ancient DNA that our ancestors experienced ongoing genetic upgrades to learning and problem-solving abilities – even if the exact historic phenotype (e.g. better memory, innovation, or social cognition) is inferred indirectly.
- Brain size and related traits: It’s worth noting that selection on intelligence does not always mean selection on brain volume per se. Paradoxically, the human brain has slightly decreased in size since the Late Pleistocene. Polygenic trend analysis confirms a mild decline in genetic propensity for larger intracranial volume (ICV) from the Upper Paleolithic into recent millennia. This likely reflects energy trade-offs or self-domestication (smaller, more efficient brains) rather than a cognitive downgrade. Indeed, the ICV PGS in Europe shows only a small negative correlation with age (r ≈ –0.08 over 12k years) and no sharp shifts – consistent with fossil data showing a ~10% reduction in average cranial capacity from Ice Age hunter-gatherers to modern humans. In short, our brains may have gotten slightly smaller but more “wiring optimized,” while the genetic dial on cognitive ability still moved upward via other pathways (e.g. synaptic plasticity, neurotransmission genes, frontal cortex development, etc.). Tellingly, some of the strongest Holocene sweeps have been at loci linked to neural development. For instance, the X chromosome shows evidence of dramatic selective sweeps in the last ~50–60k years near genes like TENM1, which is implicated in brain connectivity; researchers speculate this could reflect adaptation in faculties like language (phonological recursion) or social cognition in Homo sapiens after diverging from archaic humans. In sum, ancient DNA delivers a resounding refutation of the idea that “nothing has changed” genetically in the human mind – on the contrary, many small allelic adjustments accumulated to produce non-trivial shifts in our species’ cognitive toolkit over the Holocene.
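To make the underlying method concrete, here is a minimal sketch of the kind of time-series PGS analysis these studies run: score each ancient individual with modern GWAS effect sizes, then regress the score on sample age while controlling for ancestry. The file names, column layout, and use of plain OLS are illustrative assumptions; the published analyses use far larger SNP panels and more careful corrections for ancestry, relatedness, and sampling.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative inputs (file names and columns are assumptions, not the studies' actual data).
# genotypes: one row per ancient individual (sample_id, age_bp, pc1..pc10, rs123, rs456, ...)
# weights:   per-SNP effect sizes from a modern GWAS of educational attainment / IQ (snp, beta)
genotypes = pd.read_csv("ancient_genotypes.csv")
weights = pd.read_csv("gwas_weights.csv")

snps = [s for s in weights["snp"] if s in genotypes.columns]
betas = weights.set_index("snp").loc[snps, "beta"].to_numpy()

# Polygenic score = weighted sum of allele dosages (0/1/2), then standardized.
dosages = genotypes[snps].to_numpy(dtype=float)
pgs = dosages @ betas
pgs = (pgs - pgs.mean()) / pgs.std()

# Regress PGS on sample age (per millennium before present),
# controlling for ancestry principal components to reduce confounding by admixture.
X = genotypes[["age_bp"] + [f"pc{i}" for i in range(1, 11)]].copy()
X["age_bp"] = X["age_bp"] / 1000.0
X = sm.add_constant(X)
model = sm.OLS(pgs, X).fit()

# A negative age coefficient means younger samples score higher,
# i.e. the PGS rose toward the present.
print(model.params["age_bp"], model.pvalues["age_bp"])
```

Controlling for ancestry components is the crucial step here: without it, shifts in population composition (e.g. admixture events) could masquerade as selection – which is why the studies above emphasize that their trends survive such controls.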
## The “No Cognitive Evolution” Counterarguments (and Why They Fail)
It has long been an anthropological article of faith that human cognition reached a peak of behavioral modernity by ~50,000 years ago, with no further meaningful biological change since. Stephen Jay Gould famously claimed that “there’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.” Similarly, the physicist David Deutsch has asserted that prehistoric people were “our equals in mental capacity; the difference is purely cultural.” This blank-slate catechism – the notion that evolution miraculously halted for the human brain while continuing apace for traits like disease resistance or pigmentation – is now directly contradicted by evidence. Let’s examine the major counterarguments and why they no longer hold up:
- “Humans haven’t had time to evolve cognitively; 50k years is too short.” This argument misjudges the power of even weak selection over many generations. As a thought experiment, consider that a sustained selection differential of just +1 IQ point per generation (well below the noise of IQ tests) would, with heritability ~0.5, shift the mean by ~+0.5 IQ per generation. In 400 generations (≈10,000 years), that’s +200 IQ points – obviously an absurd extrapolation. The point is that unless selection was literally zero in every generation (an extremely unlikely coincidence), even a tiny persistent pressure could produce significant change over tens of millennia. Those who insist “no change since the Pleistocene” are essentially asserting that for 2,000+ generations, intelligence conferred no reproductive advantage whatsoever. To put it bluntly, the only world in which our ancestors’ brains froze 50k years ago is one where intelligence provided zero fitness benefit – a world that no hunter-gatherer or farmer would recognize. Realistically, higher cognitive ability helps humans solve problems, acquire resources, and navigate social complexities; it’s implausible that such a trait would be evolutionarily neutral across all environments. The ancient-genome results (Section 1) decisively show it was not neutral, but under positive selection whenever socioecological challenges rewarded learning, planning, and innovation.
- “Any differences in cognitive outcomes are due to culture, not genes.” Cultural evolutionists rightly emphasize that cumulative culture can dramatically raise human performance without genetic change – for instance, widespread schooling can increase knowledge and test scores across a population (the Flynn effect). However, cultural and genetic evolution are not mutually exclusive; in fact they often cooperate. Culture can create new selection pressures: e.g. dairy farming culture selected for lactase genes, and similarly, the move to complex agrarian society likely selected for genes aiding abstract thinking, self-control, and long-term planning. As anthropologist Joseph Henrich notes, “genetic evolution has been speeding up for the last 10,000 years… responding to a culturally constructed environment.” Our genomes adapted to agriculture, high population density, new diets and diseases – why would they not also adapt to the new cognitive demands of those environments? Culture buffers some selective pressures but amplifies others (for example, the value of numeracy and literacy in complex societies creates a fitness edge for those who learn quickly). Indeed, gene–culture coevolution theory predicts that traits like general intelligence would continue to evolve in response to novel challenges. The empirical data now vindicate this: populations with longstanding traditions of dense, technologically advanced societies show higher frequencies of alleles linked to educational attainment than recently contacted hunter-gatherers. Culture and genes climbed the ladder together – a feedback loop, not an either/or.
- “Prehistoric peoples were just as smart – look at ancient creativity and tools.” There’s no doubt that humans 40,000 years ago were intelligent in an absolute sense (they were biologically Homo sapiens, after all). But the scientific question is one of averages and incremental changes, not a binary “smart vs. dumb.” Critics often point to early symbolic artifacts (e.g. ochre pigments, beads from ~100kya) as proof that cognitive sophistication was present well before 50kya. However, these isolated finds are debated – many archaeologists see them as tenuous precursors, with truly explosive innovation (cave art, sculpture, complex tools) only appearing in the Upper Paleolithic (~50–40kya). That pattern hints at a threshold event – possibly a biological cognitive upgrade (sometimes hypothesized as a genetic mutation affecting brain wiring or language). If so, then it is actually a case for recent evolution: some heritable change might have enabled the “Great Leap Forward” in culture. More generally, while ancient individuals were certainly capable, it doesn’t follow that all populations at all times had identical genetic potential. Evolution doesn’t stop at a finish line of “modern human behavior.” For example, it’s telling that the earliest civilizations and writing systems arose in certain regions (Fertile Crescent, Yellow River, etc.) after millennia of agriculture – precisely the populations our genetic data show had the strongest selection for EA/IQ alleles. This doesn’t mean those early farmers were inherently smarter than foragers elsewhere – it means they had started to become a little smarter through evolution, in tandem with their cultural head start. We now have ancient DNA time-transects showing cognitive PGS remained flat in purely hunter-gatherer populations for millennia, but began climbing once agriculture and state-level societies emerged. In essence, human cognitive evolution continued, modestly but measurably, wherever cultural complexity ramped up.
- “Brain size actually shrank; doesn’t that imply less intelligence?” It’s true that the average brain volume of Homo sapiens today (~1350 cc) is below that of Upper Paleolithic people (~1500 cc). Some anthropologists argue this indicates a self-domestication process making us more docile and maybe dumber (comparing us to domesticated animals, which have smaller brains than their wild counterparts). Yet brain size is only loosely correlated with IQ (within modern humans, the correlation is ~0.3–0.4). The quality and organization of neural circuits matter more. It’s quite plausible our brains became leaner but more efficient – perhaps reflecting a shift from raw visuospatial prowess to more specialized cortical networks for complex cognition. The genetic evidence supports this interpretation: despite a slight Holocene decrease in cranial volume, alleles enhancing cognitive function were rising. For instance, one ancient-genome scan notes that numerous brain-development genes (beyond just head-size regulators) were under selection. We might liken it to computer chips: our “hardware” got smaller in some respects, but our “software” (neural connectivity and neurotransmitter tuning) got an upgrade. Additionally, a smaller brain within a domesticated, cooperative context might reduce energy use and birthing risks while social intelligence increases. In any case, the modest reduction in ICV PGS (on the order of 0.1 SD over 10,000 years) has clearly not precluded rising cognitive abilities. It’s a nuanced evolutionary trade-off, not simply a decline. (And as a tongue-in-cheek counter: if one truly thinks we’ve all gotten dumber since the Ice Age, one has to admit the brain was subject to genetic change – undermining the core “no evolution” claim to begin with.)
- “Differential outcomes today are entirely environmental, so genetics can’t be involved.” This argument often stems from a laudable caution against genetic determinism, but it confuses current variation with historical change. Yes, the fact that (for example) literacy rates differ due to schooling access says nothing about whether genes changed over centuries. One can fully acknowledge the massive role of environment (the Flynn effect has raised IQ scores by >2 SD in many countries via education, nutrition, etc.) while also recognizing underlying gene-frequency trends. In fact, modern observations present a stark warning: phenotype and genotype can move in opposite directions. Case in point – in the 20th century, measured IQ in developed nations rose (the Flynn effect) even as genetic selection was acting against higher IQ (due to differential fertility). One recent analysis of U.S. health and retirement data estimates that genetic selection lowered the population’s cognitive polygenic score by about 0.04 SD per generation in the mid-20th century – roughly equivalent to –0.6 IQ points per generation being lost to dysgenic trends, even as actual test scores climbed thanks to environmental improvements. In other words, culture can mask or outweigh genetics in the short run. But over hundreds of generations, if selection consistently favors or disfavors certain alleles, the genetic signal will eventually shine through. Dismissing long-term evolution by pointing to short-term environmental effects is a non sequitur. Both factors have been at work: environment shapes the expression of intelligence, while evolution slowly but surely shaped the distribution of intelligence-favoring genes.
In summary, the entrenched anthropological stance that “nothing has changed since the Stone Age” is untenable in light of modern evidence. This stance persisted more as an ideological commitment to human equality and exceptionalism than a testable hypothesis – it was, as one commentator put it, “surviving by decree, not data.”  Today, we have the data. Ancient genomes, selection scans, and quantitative genetics have converged to reveal that human cognitive evolution did continue into the Holocene and even the historical era. The changes were incremental, not turning our ancestors into idiots (they were clearly intelligent enough to survive and innovate), but they were directional – refuting the idea of a flat intellectual landscape frozen in time.
## The Breeder’s Equation, Thresholds, and Long-Term Change
The Breeder’s Equation from quantitative genetics provides a simple lens to quantify how much evolutionary change we expect in a trait under selection. It states:
$$\Delta Z = h^2 S$$
where ΔZ is the change in the trait mean per generation, h² is the trait’s heritability, and S is the selection differential (the difference in the trait mean between reproducing individuals and the overall population). This elegant formula – essentially a one-step forecast of response to selection – has some profound implications when extended over many generations, especially for a highly polygenic trait like intelligence.
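A quick numerical sketch of how this compounds, applying the equation generation after generation (the heritability and selection differentials below are the illustrative values used in this section, not empirical estimates):

```python
def cumulative_response(h2: float, s_per_gen: float, generations: int) -> float:
    """Naive cumulative response to selection under the Breeder's Equation.

    Assumes heritability and the selection differential stay constant and
    ignores changes in genetic variance -- a deliberate simplification.
    """
    delta_z = h2 * s_per_gen      # per-generation response, in trait SD units
    return delta_z * generations  # linear accumulation over many generations

# Hypothetical scenarios (SD units; 1 SD of IQ = 15 points)
print(cumulative_response(0.5, +0.10, 100) * 15)  # sustained weak positive selection over ~2,500 years -> +75 IQ points
print(cumulative_response(0.5, -0.10, 10) * 15)   # recent negative selection over ~250 years -> -7.5 IQ points
```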
Let’s unpack it in the context of human cognitive traits:
- Even weak selection can have big effects given enough time. Suppose a population has a very modest positive selection differential on intelligence – say parents are on average just 0.1 SD (about 1.5 IQ points) above the population mean. Even with moderate heritability of 0.5, each generation the mean IQ would shift by ΔZ = 0.5 × 0.1 = 0.05 SD (~0.75 IQ points). That seems negligible – barely perceptible in one generation. But compound it over 100 generations (≈2,500 years): if the environment and selection regime stayed roughly consistent, you’d accumulate ~5 SD of change (0.05 × 100) – i.e. a 75-point IQ increase! Of course, in reality selection strengths ebb and flow; there could also be trade-offs limiting indefinite change. But the core insight is that evolutionary inertia is a myth – small directional pushes, if sustained, lead to very large outcomes. Our 50,000-year timeframe encompasses ~2,000 human generations. It is plenty of time for significant cognitive evolution, even under gentle selective pressures.
- Reversed selection can likewise erode gains. The same math applies in the opposite direction. As noted above, in the 20th century the selection differential on education/IQ turned negative in many societies (due to a combination of factors like the lower fertility of high-education individuals). Estimates from genomic data in the U.S. suggest S ≈ –0.1 SD for EA in recent generations, which implies ΔZ ≈ –0.05 SD per generation genotypically. Over just 10 generations (~250 years), that would accumulate to a –0.5 SD change, undoing perhaps 7 or 8 IQ points of genetic potential. This is not merely hypothetical – it’s a trajectory we are empirically on. The Breeder’s Equation thus cuts both ways: it predicts not only the rapid rise of a trait under positive selection, but also its decline under relaxation or reversal of selection. This duality is crucial for interpreting the past. If one argues that no cognitive evolution occurred in prehistory, one implicitly requires that selection was perfectly zero or symmetrically fluctuating so as to cancel out over thousands of generations – an extraordinary coincidence. Given how quickly we can already detect genetic IQ declines in the last two generations, it would be special pleading to assume prehistoric selection never once tilted positive for IQ. On the contrary, it likely tilted positive often (e.g. when smarter individuals better survived hard times or achieved higher status in stratified societies), yielding the upward genetic trend now recorded in ancient DNA.
- Threshold models and non-linear jumps: One nuance often raised is that some cognitive abilities might behave like threshold traits – you either have “enough” of some neural circuitry to support a capability or you don’t. Language is a classic example argued in this vein: maybe incremental increases in general intelligence do little until a threshold is crossed that enables syntactic recursion or true symbolic thought, at which point the phenotype qualitatively shifts (a “phase change”). If such thresholds exist, selection can have non-linear effects. A population might see relatively little apparent change for generations, then a sudden flowering of new behaviors once genetic accumulation pushes the trait past the critical point. The archaeological “sapient paradox” – the gap between anatomically modern humans ~200kya and the cultural explosion ~50kya – could reflect this dynamic. A +5 SD shift in a threshold cognitive trait (of the kind hypothesized above) is not just “more of the same” – it can mean the difference between no written language and the spontaneous invention of writing systems, or between Stone Age stagnation and an Industrial Revolution. This perspective rebuts the claim that a few standard deviations of genetic change are irrelevant. In fact, the calculated +0.5 SD rise in cognitive PGS since the early Holocene, if mapped onto certain underlying capacities, might have been the difference between a world with only sparse farming villages and a world teeming with civilizations (for a normally distributed trait, a +0.5 SD mean shift roughly triples the fraction of individuals more than 2 SD above the old mean, from about 2.3% to about 6.7%). In short, small genetic changes can prime big cultural breakthroughs once thresholds are passed. Human evolution is likely a mix of gradual trends and these tipping-point events.
- Lande’s multivariate version – correlated responses: The Breeder’s Equation generalizes to multiple traits via Lande’s equation, $\Delta \mathbf{z} = \mathbf{G} \boldsymbol{\beta}$, where G is the genetic covariance matrix and β is the vector of selection gradients on each trait. The key takeaway is that you can get a response in trait Z without directly selecting on Z at all, if Z is genetically correlated with some trait X that is under selection. Apply this to intelligence: even if our ancestors weren’t explicitly “trying” to get smarter, selection on proxies or correlates could have done it indirectly. For example, consider social status or wealth in a complex society. If higher-IQ individuals tended (on average) to attain higher status or accumulate more resources, and those individuals had more offspring, then genes for intelligence would get dragged along by selection on social success. This is essentially Gregory Clark’s thesis in A Farewell to Alms (2007) – that in medieval England the economically successful (who were, in his argument, more prudent, educated, and perhaps cognitively adept) outreproduced the poor, gradually shifting the population’s traits. We now have genetic evidence supporting this kind of correlated response: in a recent analysis of ancient genomes from England (1000–1850 CE), polygenic scores for educational attainment significantly increased over those centuries, implying genetic selection favoring the traits that made one successful in that society. Importantly, it’s not that medieval peasants sat around selecting mates for IQ; rather, selection operated via life outcomes (literacy, wealth, fecundity), which happened to be genetically correlated with cognitive ability. Similarly, selection for disease resistance or other fitness traits could have incidental cognitive effects. (There is evidence, for instance, that schizophrenia-risk alleles may have been selected against because they reduce overall biological fitness, and since those overlap genetically with cognitive function, their removal nudges average cognitive ability up.) In evolutionary genetics, every trait connected in the web can move if any part of the web is tugged. Human intelligence genes did not evolve in isolation; they rode the coattails of many selective forces – from climate adaptation to sexual selection for certain personalities – all filtered through the genetic covariance structure. The end result was a steady march in our cognitive-polygenic index, even if “make brains smarter” was never the sole target of selection.
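A toy illustration of this correlated-response logic under Lande’s equation – the G matrix and selection gradient below are invented numbers, chosen only to show that a trait can shift with zero direct selection on it:

```python
import numpy as np

# Traits: [cognitive ability, social/economic status], in SD units.
# G: additive-genetic variance-covariance matrix (hypothetical values).
G = np.array([
    [0.50, 0.20],   # cognitive ability: genetic variance 0.5, covariance 0.2 with status
    [0.20, 0.40],   # status: genetic variance 0.4
])

# beta: selection gradients. Suppose selection acts ONLY on status
# (e.g. richer households leave more surviving children), not directly on cognition.
beta = np.array([0.0, 0.10])

# Lande's equation: per-generation change in the trait means.
delta_z = G @ beta
print(delta_z)  # -> [0.02, 0.04]: cognition shifts upward despite zero direct selection on it
```

Swap in any plausible genetic covariance between cognition and a selected proxy (status, disease resistance, personality) and the same arithmetic applies.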
To ground this in numbers, consider what the ancient DNA is telling us. Polygenic scores for cognitive ability (using GWAS hits for IQ/EA) have risen on the order of 0.5 standard deviations from the early Holocene to today. If one assumes (generously) that these scores explain, say, ~10% of variance in the actual trait, a 0.5 SD genotypic increase might translate to a ~0.16 SD phenotypic increase (a rough approximation, since the true predictive power of current GWAS hits for IQ is in that ballpark). 0.16 SD is about 2.4 IQ points. Not huge – but that’s per 10,000 years. Over 50,000 years, if the trend was consistent, it could be on the order of 12 IQ points. Interestingly, some paleoanthropologists have speculated that Upper Paleolithic humans (who left behind relatively simplistic tools) might indeed have had somewhat lower average cognitive capacity for symbolic reasoning than later Holocene humans – not a difference you’d notice in daily survival skills, but enough to matter for the rate of innovation. Whether that specific magnitude is accurate or not, the Breeder’s Equation assures us that large cumulative changes are plausible under small steady selection, and the ancient-DNA data now confirm a trajectory broadly in line with theoretical expectations (e.g. an S on the order of 0.2 IQ points per generation would neatly explain the genome-wide shifts we observe over ~400 generations).
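Spelling out that back-of-the-envelope conversion (the 0.5 SD PGS shift, the ~10% variance explained, and the 15-point IQ standard deviation are the assumptions stated above, not firm estimates):

```python
import math

pgs_shift_sd = 0.5    # genotypic (PGS) change since the early Holocene, in PGS SDs
var_explained = 0.10  # assumed share of phenotypic variance captured by the PGS
iq_sd_points = 15     # IQ points per phenotypic SD

# A standardized PGS predicts the phenotype with slope sqrt(R^2), so a shift in the
# mean PGS maps onto an expected phenotypic shift of shift * sqrt(R^2).
pheno_shift_sd = pgs_shift_sd * math.sqrt(var_explained)
print(round(pheno_shift_sd, 2), round(pheno_shift_sd * iq_sd_points, 1))  # ~0.16 SD, ~2.4 IQ points
```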
## Modern Selection Trends and Their Historical Implications
The study of ongoing evolution in contemporary humans provides a sobering counterpoint – and a clue to past regimes. In the late 20th and early 21st century, most industrialized populations underwent a reversal of selection on cognitive traits. With contraception, improved child survival, and shifts in values, the previously positive correlation between intelligence and fertility flipped negative. For example, a comprehensive meta-analysis by Lynn (1996) found an average IQ–fertility correlation around –0.2 across dozens of datasets, implying about –0.8 IQ points of selection per generation against g. More direct genomic approaches back this up: Hugh-Jones and colleagues (2024) examined actual polygenic scores in U.S. families and reported that “scores which correlate positively with education are being selected against,” leading to an estimated genetic change of –0.055 SD per generation in cognitive ability. This translates to roughly –0.6 IQ points lost genetically each generation. Crucially, these findings come from a period of unprecedented medical and social support – a relaxed selective environment by historical standards. Yet even in this comfortable context, natural selection on the genomic level did not vanish; it simply took a different turn (favoring traits associated with earlier childbearing and lower educational attainment).
Why does this matter for the past? Because it demonstrates that human populations are never truly at an evolutionarily neutral equilibrium. Selection is always happening in some form, even if modern society obscures its effects with technology. If in the easiest era of human existence we can measure a directional genetic change in one century, how much more forceful might selection have been in harsher eras? Historically, high intelligence may have been a double-edged sword: it could aid resource acquisition (boosting fitness) but also, in certain contexts, come with trade-offs (perhaps a slight propensity toward neurological or psychiatric issues). In pre-modern times, however, the balance seems to have favored higher cognition more often than not:
- Historical positive selection (the case of “breeding for brains”): Many scholars have pointed to the demographic patterns of agrarian societies, where the upper classes – often with greater access to nutrition and education, and perhaps higher average intellects – had more surviving offspring than the lower classes. Gregory Clark’s analysis of English family lineages (from wills and records) showed that the economically successful in medieval England had about 2× the number of surviving children as the poor, leading to a slow dissemination of “middle-class” genes into the general population. Traits under selection in that model included literacy, foresight, patience, and cognitively linked dispositions (what Clark dubbed “upper-tail human capital”). Genetic data now bolster this narrative. A recent ancient DNA study specifically tested Clark’s hypothesis by looking at polygenic scores in remains from medieval and early-modern England. The results: a “statistically significant positive time trend in educational attainment polygenic scores” from 1000 CE to 1800 CE. The magnitude of increase in those genotypic scores, though modest, is “large enough to serve as a contributory factor to the Industrial Revolution.” In plainer terms, the English population genetically inched upward in traits conducive to learning and innovation, which may help explain why that population was primed for an unprecedented economic/cultural explosion by the 18th century. This is a powerful vindication of the idea that natural selection didn’t stop at the Paleolithic – it was shaping cognitive capacities right into the Early Modern period.
*Figure: Polygenic scores (PGS) for cognitive and social traits in medieval vs. contemporary English genomes. Modern samples sit consistently higher than medieval samples on educational attainment (EA) indices and IQ, indicating a genetic shift favoring these traits over the past ~800 years. Such findings empirically support theories that modest selection in historical societies accumulated into appreciable differences.*
- Gene-culture “pendulum swings”: The pattern over time might be cyclical or environment-contingent. In extremely difficult conditions (e.g. Ice Age tundra or pioneering farmer communities), survival may have depended more heavily on general intelligence – the ability to invent new tools, remember food locations, or plan for winter – so selection on IQ was strong. In more stable prosperous periods, other factors (like social alliances or physical health) might matter more, diluting selection on IQ. Fast-forward to the post-industrial era, and we see a scenario where education-intensive lifestyles actually correlate with lower reproductive output (for sociocultural reasons), flipping selection negative. What this suggests is that the direction of selection on cognitive traits has not been uniform across time or space, but the overall long-term trend was upward, because during the long sweep of prehistory and early history, each innovation or environmental challenge created new advantages for bigger brains or better minds. By the time we reach the modern era, we are in a novel environment (easy survival, conscious family planning) where that trend has reversed. If we think in terms of the Breeder’s Equation over the entire 50,000-year span, the first ~49,000 years contributed a lot of small positive ΔZ’s, and the last few centuries might be contributing a small negative ΔZ. The net sum is still positive in favor of higher intelligence compared to the Paleolithic baseline.
- Modern genetic load vs. past optimization: Another angle is to consider mutational load and selection’s role in purging deleterious variants. The human genome accumulates new mutations each generation, many of which are neutral or mildly harmful. Some fraction likely affects neurodevelopment negatively. In high-mortality, high-selection environments of the past, individuals with heavier loads of deleterious mutations (including those hurting brain function) may have been less likely to survive or reproduce, thereby keeping the population’s genetic “quality” for intelligence high. In modern populations, relaxed selection allows more mutational burden to persist (a hypothesis to explain rising prevalence of certain disorders). This could mean that ancient groups were genetically more optimized for a tough world – ironically more “fit” in a Darwinian sense – whereas today we carry more weakly deleterious alleles (which could subtly impair average cognitive potential). Genomic studies have indeed found signals consistent with purifying selection acting on intelligence-related genes in the past (e.g. alleles that reduce cognitive function tend to be at low frequency, as expected if selection weeded them out). This perspective underscores that evolutionary pressures were likely shoring up our cognitive architecture throughout prehistory, removing the worst mutations and occasionally favoring new beneficial ones. Our current era, conversely, may be tolerating a growing burden that selection used to constrain. The implication is that prehistoric humans might have been closer to their theoretical genetic potential for intelligence than we are beginning to be under relaxed conditions – a reversal that only further highlights how unnatural the “selection = 0” assumption is.
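The “relaxed selection” intuition can be made concrete with the textbook deterministic model of mutation–selection balance; the mutation rate and selection coefficients below are purely illustrative, not estimates for any real locus:

```python
def equilibrium_frequency(mu: float, s: float, generations: int = 200_000) -> float:
    """Deterministic frequency of a mildly deleterious allele under recurrent
    mutation (rate mu) and genic selection (cost s per allele copy).

    Iterates the standard approximation dq = mu*(1 - q) - s*q*(1 - q);
    the equilibrium is approximately mu / s.
    """
    q = 0.0
    for _ in range(generations):
        q += mu * (1.0 - q) - s * q * (1.0 - q)
    return q

mu = 1e-5
print(equilibrium_frequency(mu, s=0.010))  # strong purging   -> q settles near 0.001
print(equilibrium_frequency(mu, s=0.001))  # relaxed selection -> q settles near 0.01 (10x higher load)
```

The point is only qualitative: when the per-copy fitness cost of a mildly harmful variant shrinks, its equilibrium frequency – and hence the aggregate load across many such loci – rises.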
Integrating all lines of evidence: human intelligence has been, and remains, a moving target. Ancient DNA confirms the rise of cognitive polygenic scores over thousands of years, while modern data document a recent fall. Both trends are slight per generation – a few hundredths of a standard deviation – yet across deep time they add up decisively. It is frankly astonishing that some still claim our minds exist in an evolutionary stasis bubble, immune to the forces that have shaped every other aspect of life. The reality is that we are very much a product of those forces. Our species’ rapid cultural progress over the last 50 millennia was not a purely cultural phenomenon occurring on a genetically unaltered substrate; it was a coevolutionary march. Each advance altered our selective landscape, to which our genomes then slowly adapted, enabling further advances, and so on.
As of 2025, the verdict of population genetics, ancient genomics, and quantitative biology is in: human cognitive traits did evolve measurably in the recent evolutionary past. The “blank slate” view, which treated the human brain as a constant since the Upper Paleolithic, turns out to be a polite fiction – one that may have been politically comforting, but not scientifically correct. Intelligence, like any other complex trait, responded to selection. The Breeder’s Equation taught us theoretically that 50,000 years is plenty of time for change; now ancient DNA has shown us empirically that such change happened. In a sense, this should not be surprising – it would have been far more surprising if a trait as fitness-relevant as cognitive ability did not undergo directional selection when early humans faced new challenges (from Ice Age climates to agricultural living).
What does this mean for us today? One implication is that human variation in cognitive abilities (among individuals and populations) likely carries some signal of evolutionary history, not solely of recent environment – a topic of great sensitivity, but one that must be approached with honesty and nuance. Another implication is that our species’ remarkable achievements – art, science, civilization – were built on a slowly shifting genetic canvas. Had we remained with the exact same “body and brain” of 50,000 years ago, it’s debatable whether the scale of modern civilization would have been possible. And looking ahead, as selection pressures change (or even reverse), we must consider the long-term genetic trajectory of traits we care about. Will future humans be genetically less inclined toward abstract intelligence if current trends continue, and if so, how might society compensate? These are no longer questions of idle speculation; they are informed by real data.
To conclude on a “Straussian” note: recognizing that human cognitive evolution is ongoing (and recent) should not be unsettling – it is an affirmation of our place in nature’s tapestry. Far from diminishing human dignity, it enriches our story: our ancestors were not static placeholders for us; they were active participants in shaping what humanity would become, via both culture and genes. The reality check of the past 50,000 years is that evolution didn’t stop when culture started. Humans made culture, culture made evolution, and the dance continues. The blank slate is out; the Number (or rather, the polygenic score) is in. We are still evolving – and yes, that includes our brains.
## Sources
- Akbari, A. et al. (2024). “Pervasive findings of directional selection…ancient DNA…human adaptation.” (bioRxiv preprint) – Evidence of >300 loci under selection in West Eurasians, including polygenic shifts in cognitive-performance traits.
- Piffer, D. & Kirkegaard, E. (2024). “Evolutionary Trends of Polygenic Scores in European Populations from the Paleolithic to Modern Times.” Twin Res. Hum. Genet. 27(1):30–49 – Reports rising PGS for IQ, EA, and SES over 12 kyr in Europe; cognitive scores +0.5 SD since the Neolithic, along with declines in neuroticism/depression PGS due to genetic correlation with intelligence.
- Piffer, D. (2025). “Directional Selection…in Eastern Eurasia: Insights from Ancient DNA.” Twin Res. Hum. Genet. 28(1):1–20 – Finds parallel selection patterns in Asian populations: IQ and EA PGS increasing through the Holocene, negative selection on schizophrenia/anxiety, positive on autism (consistent with European results).
- Kuijpers, Y. et al. (2022). “Evolutionary trajectories of complex traits in European populations of modern humans.” Front. Genet. 13:833190 – Uses ancient genomes to show a post-Neolithic increase in genetic height and intelligence, confirming continued selection on these polygenic traits.
- Hugh-Jones, D. & Edwards, T. (2024). “Natural Selection Across Three Generations of Americans.” Behav. Genet. 54(5):405–415 – Documents ongoing negative selection against EA/IQ alleles in the 20th-century US, estimating a ~0.039 SD per-generation decline in phenotypic IQ potential.
- Discover Magazine (2022), on Gould’s quote: “Human Evolution in the Modern Age” by A. Hurt – Cites Gould’s “no change in 50,000 years” claim and notes that most evolutionary biologists now disagree, pointing to examples of recent human adaptation.
- Henrich, J. (2021). Interview in Conversations with Tyler – Discusses cultural evolution and acknowledges gene–culture feedback, noting genetic evolution accelerated in large populations over the last 10k years (e.g. selection for blue eyes, lactose tolerance).
- Clark, G. (2007). A Farewell to Alms. Princeton Univ. Press – Proposed the idea of differential reproduction in pre-industrial England leading to genetic changes (supported by Piffer & Connor 2025 preprint: genetic EA scores rose 1000–1850 CE in England).
- Woodley of Menie, M. et al. (2017). “Holocene selection for variants associated with general cognitive ability.” (Twin Res. Hum. Genet. 20:271-280) – An earlier study comparing a small set of ancient genomes to modern, suggesting an increase in alleles linked to cognitive function over time, laying groundwork for larger analyses.
- Hawks, J. (2024). “Natural selection on the rise.” (John Hawks Blog) – Reviews new ancient DNA findings, including the Akbari et al. results, and emphasizes how these data confirm an acceleration of human evolution in the Holocene (as Hawks and coworkers predicted in 2007).