
The Intellectual Decline of the Army Officer Corps: Why Army Officers are Getting Dumber

by 1LT "Hawkeye" (a pseudonym)

If you’re uncomfortable dealing with intellectual ambushes from your own ranks, it’ll be a heck of a lot worse when the enemy does it to you.

-Jim Mattis, Call Sign Chaos: Learning to Lead6

It is no secret that the military’s highest performing officers are leaving the Army in large numbers.1,2,4 Think tanks, senior officers, and junior officers themselves have been sounding the alarm for years. However, amid all the discussion and effort around how to retain and develop talent (see AIM 2.0, the Army Talent Management Task Force, and the Battalion Commander Assessment Program), there is very little discussion of whether the Army is acquiring the right people to begin with. Data on the recruitment of West Point and ROTC cadets show clear trends away from using quantitative intellectual measures to assess and select candidates applying to commissioning programs.6 Since the implementation of the All-Volunteer Force, the Army officer corps has seen a significant decline in the innate intellectual capacities of those it commissions.

In 2017 the US Army War College Strategic Studies Institute published an in-depth look at the declining use of rigorous quantitative measures of the cognitive abilities of officer candidates.5 While the standards associated with awarding a commission have historically varied according to manpower needs (i.e., lowered standards during large wars, when more officers were needed), general trends are visible over time. One of the largest has been the decline of standards for testing the cognitive ability of officer candidates. Specifically, after the Vietnam War and the implementation of the All-Volunteer Force, officer recruiting standards declined steadily, and unevenly across commissioning sources, in order to make recruiting numbers.

During the major conflicts of the 20th century, officer production varied greatly according to the needs of the conflict. But following the Vietnam War and the creation of the All-Volunteer Force, a series of changes shifted the focus away from assessing cognitive ability and toward assessing characteristics associated with career longevity among officer candidates.5 As the Army changed its personnel system, it found that lowering accession standards was an easy way to meet its target numbers for commissioning. Until 1976, ROTC programs had no commissioning goal and were instructed simply to produce as many officers as they could.

ROTC, which accounted for 75% of all officer commissions in the 1970s, saw moves away from academically elite institutions toward larger, and less selective, public institutions.5 Since 1972, over 70% of all ROTC graduates have come from schools defined as “Less Competitive” or “Non-Competitive” by Barron’s Profiles of American Colleges. Even within these schools, ROTC cadets performed worse academically than their peers. A 1976 study by Card and Shanner, conducted for the Army by the American Institutes for Research, showed that cadets had lower grade point averages, lower verbal aptitude, and lower academic ability than their non-ROTC peers.14

After attempts to standardize ROTC commissioning requirements were found to be too restrictive, the ROTC Qualifying (RQ) exam was dropped in order to meet commissioning numbers. The less rigorous Cadet Evaluation Battery (CEB) was used in its place, but 60% of this new exam measured non-cognitive characteristics. Army research eventually found that a CEB score of 80 (on a scale normalized to a mean of 100), the cutoff score for ROTC programs, correlated with a composite SAT score of 650 (out of 1600). SAT score distributions have been historically stable since the 1970s, so they serve as a good point of comparison for an exam known to heavily test general cognitive ability. In 1976 the average composite SAT score was 1006, and in 2016 it was 1002. A CEB score of 80, correlating with a composite SAT score of 650, would place an individual in the bottom 1 percent of testers in the 2016 score distribution. That is to say, the Army was willing to give commissions to individuals who did worse than 99% of their peers on a quantitative cognitive assessment.
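For readers who want to sanity-check percentile claims like this one, the mapping from a test score to a percentile can be sketched with a simple normal approximation. This is purely illustrative: the mean and standard deviation below are hypothetical round numbers, not actual College Board parameters, and real SAT percentiles come from published score tables rather than a normal curve.

```python
from statistics import NormalDist  # Python 3.8+ standard library

# Hypothetical normal approximation of a composite score distribution.
# mu and sigma are illustrative round numbers, not College Board figures.
sat_model = NormalDist(mu=1000, sigma=200)

def percentile(score: float) -> float:
    """Estimated fraction of test takers scoring at or below `score`."""
    return sat_model.cdf(score)

print(round(percentile(1000) * 100))  # a score at the mean sits at the 50th percentile
print(percentile(650))                # far below the mean: low single-digit percentiles
```

Under this toy model a 650 composite falls in the low single-digit percentiles; the precise bottom-1-percent figure in the text comes from the actual 2016 score distribution rather than from a normal fit.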

During the transition to an All-Volunteer Force, West Point also saw declines in cadet intellectual quality, as the Army continued to emphasize characteristics associated with prolonged service when recruiting officers. West Point is often viewed as the most selective commissioning source, but even its standards have been in decline. The chart below shows composite SAT scores for the middle 50% of admitted students at several universities, as well as average SAT scores. West Point’s score distribution is shown alongside a major state university (Ohio State, admission rate 52%), a selective state university (University of Virginia, admission rate 29.9%), and an elite private institution (Harvard, admission rate 5.4%). West Point’s admission rate is 10%.

The average West Point cadet, with an SAT score of 1250, would just barely clear the 25th percentile of Ohio State students. Even a cadet at the 75th percentile of West Point’s distribution, with a score of 1350, would barely exceed the 25th percentile of admitted students at the University of Virginia. There is little doubt that other characteristics matter in determining the quality of candidates to lead soldiers as officers, and the SAT is not a perfect measure of cognitive ability. However, West Point has a difficult case to make for being an academically elite institution if its own cadets are so far behind other schools in measured general cognitive ability.

In 1978 the Chief of Staff of the Army commissioned the Review of Education and Training for Officers (RETO) study to better understand concerns about the declining quality of incoming lieutenants. The RETO study specifically criticized the lack of consistent evaluation methods across commissioning sources (OCS, ROTC, and USMA). In response, the Army developed the Officer Selection Battery (OSB), meant to be applied across all commissioning programs with a cutoff score of 97 (correlating with an SAT score of 850). However, the OSB was never formally adopted because of concerns about how it would affect recruiting numbers. In 1984 ROTC programs abandoned cognitive testing entirely.

Over the next few decades, only 4-year ROTC scholarship awardees faced a required cognitive test, in the form of SAT cutoff scores. Until 2015 Cadet Command did not even gather data on the SAT scores of all of its cadets. In 1998 Cadet Command moved from a centralized selection board for scholarships to the Campus Based Scholarship Program. This new program gave Professors of Military Science at ROTC programs more control over how they made their recruiting numbers, which was now the number one mission of ROTC programs. Cadets without 4-year scholarships faced no consistent cognitive testing or cutoffs in order to get a commission.

In the early 2000s Cadet Command saw a decline in candidate quality. Of the cadets who took the ASVAB in that period (approximately 1,500 at any given time), between 20 and 30% fell below the 110 cutoff score used for OCS. Even at West Point, 25% of those who took the ASVAB in 2006 fell below 110. Furthermore, in 2006 Cadet Command lowered the minimum GPA requirement for scholarship cadets from 2.5 to 2.0. Between 2006 and 2015, over 50,000 scholarship cadets commissioned with a GPA between 2.0 and 2.49.5 This decline is even worse than it may initially seem because of historic grade inflation across American universities. At a time when it was easier than ever to earn a high GPA, ROTC programs were lowering academic standards for those who would lead Soldiers.

In the early 2000s OCS gained an increasing share of the officer-producing mission, going from 9% of commissioned officers in 1998 to 40% by 2008. Sixty percent of OCS graduates came direct from college (the enlistment option), making up 24% of total Army officer production. The shift among OCS graduates from prior-enlisted officers to those with no prior service has been dramatic. OCS, like other commissioning sources, has focused more on assessing criteria associated with career longevity among its candidates, and it handed out increasing numbers of waivers for low ASVAB scores in order to meet the need for officers in the first decade of the new millennium. Two Army Research Institute researchers found that:

In terms of performance, differences were notable on the final Army Physical Fitness Test score, leadership performance, and the total OCS score. These differences were large and significant, with in-service candidates scoring higher than enlistment-option candidates on all measures.16

Enlistment-option candidates did have, on average, higher levels of civilian education and higher scores on the Armed Forces Qualification Test (AFQT). However, OCS suffers from some of the same problems as USMA and ROTC, namely a lack of rigorously defined and enforced standards for commissioning. OCS has regularly shifted its accession criteria to meet output numbers and produce officers who are more likely to stay in the Army.

Why was the Army diminishing the importance of, and eventually dropping entirely, formal cognitive testing as part of commissioning accessions? To be blunt, the Army was, and is, more interested in measuring characteristics that correlate with officers’ organizational commitment and likelihood to stay in the service than their cognitive ability. In a volunteer force, numbers became everything: tests seen as too difficult were done away with, waivers were granted, and standards were lowered. The Army lacks any universal testing for officer accessions. ROTC, OCS, and West Point all have different standards that can be applied with a large degree of discretion by senior cadre.

Marine Corps Decline in Officer Quality

Due to the lack of standardized measures of cognitive ability across Army commissioning sources, it can be difficult to see trends. The Marine Corps, by contrast, has its officers take the General Classification Test (GCT) and has a large body of data to reference. The GCT was originally developed during WWII for all services and was found to correlate strongly with service member performance. It was eventually replaced by the ASVAB for all services except Marine officers. Because the GCT is not re-normed over time, unlike the SAT or ASVAB, it offers very good absolute comparisons across time.

A 2015 study examined changes in Marine officer GCT scores over time.3 It found a decline in scores since the 1980s, the early years of the All-Volunteer Force. The study reports that:

the GCT score in 1980 that demarcated the lower one-third of new officers that year demarcated the lower two-thirds of the new officers in 2014. While 85 percent of those taking the test in 1980 exceeded 120, the cut-off score for Marine officers in World War 2, only 59 percent exceeded that score in 2014

The fact that only 59% of Marine officers today can achieve the cutoff score used for officers during WWII, a time when the Marine Corps was several times its current size, is of serious concern. The graph below, taken from the study, illustrates the downward trend in GCT score distributions over time.

The study explains that since the creation of the All-Volunteer Force, an increasingly large pool of people with a 4-year college degree has been eligible for commissions, leading to a larger, and less selective, pool of officers.

The key point is that the pool of those attending and completing college has increased dramatically over time, increasing the pool of potential officer candidates. If the expansion of this pool over time is biased towards increasing those who were less well-suited for higher education, then the average intellectual ability of college graduates will decrease over time. This will be reflected in a decrease in the average GCT score over time.

The implications of these trends for Army officers are not encouraging. Officer recruiting and selection do not differ widely between the Army and the Marine Corps; if anything, the Marine Corps, as a smaller service, can be more selective in picking officers. The Marine Corps at least has a widely used screening tool with predictive ability for its officers. The Army lacks even this basic tool for making longitudinal comparisons of officer quality. It is not unreasonable to assume that the same quantifiable trends documented among Marine officers have been occurring among Army officers, for much the same reasons.


Many may argue that other characteristics should hold weight in Army officer accessions besides cognitive ability. While that may be technically correct, it carries little weight under the current disparate accessions models across commissioning sources, which place very little value on quantified cognitive ability. Put simply, no measure of cognitive ability with a rigid cutoff is currently employed across Army commissioning sources.

Under the All-Volunteer Force, the Army has relied almost entirely on a single factor to assess the cognitive ability of officers: the undergraduate degree. However, since the 1970s the number of people in the US who attend college and obtain a 4-year degree has increased dramatically. In 2018 almost 35% of US adults age 25 or older had a bachelor’s degree,15 compared to rates close to 10% in the 1970s. While the expansion of the pool of people who obtain a 4-year degree is unquestionably a good thing for the US, it has had other consequences for officer recruiting. The rapid expansion of the college population has also expanded the pool of potential officers, and it is not unreasonable to assume that this expansion has been biased toward those who were previously less intellectually prepared for rigorous higher-level education. Thus, the officer selection pool has become less selective, as what used to be a marker of significant scholastic and intellectual achievement has become less so. To illustrate the point, both Princeton University and The Citadel commission officers through Army ROTC programs. Princeton has an admission rate of 6.4% and an SAT average of 1510 (98th percentile nationally). The Citadel has an admission rate of 81.1% and an SAT average of 1118 (approximately 60th percentile nationally). The students who choose to commission from Princeton, though few in number, will have met a far higher intellectual screening standard than those from The Citadel, but will be 2LTs all the same.

The Army does not need to develop a new test of cognitive and learning ability. There are already many available tests with strong correlation with general cognitive ability, among them the SAT, ACT, and the Air Force Officer Qualifying Test (AFOQT). If the Army wants to have the smartest possible officers leading formations and making decisions on the battlefield, a universally administered test such as the AFOQT should take a prominent role in the accessions process for all commissioning sources.

The Army’s focus on making recruiting numbers and emphasizing characteristics associated with career longevity has created a self-licking ice cream cone within its officer corps. Meanwhile, across all commissioning sources, there has been both a relative and an absolute decline in the cognitive abilities of officers. This may matter little for the day-to-day duties of lieutenants, but the Army has no option for lateral entry: if it wants the best possible operational and strategic thinkers to win wars, it must grow them from the officers it commissions today. If large numbers of lieutenants commission without cognitive screening, or under lowered standards, the Army will produce field-grade and general officers who are not intellectually equipped to deal with the complex problems of our nation’s defense.

Below are a few common objections that readers might have to this article. We encourage people to read through the source material to better understand the nature and magnitude of the problem the Army is facing.

  1. The Army indirectly tests the cognitive ability of officers by requiring a degree

In 1940, when the Census Bureau first asked respondents about their educational levels, only 4.6% of Americans 25 or older had a 4-year degree. As of 2015, 33% of adults 25 or older did.7 Where a college degree used to be a marker of academic and intellectual ability, it is now seen almost as a necessity for any knowledge-based career field, in which officers should be included.

To argue that the intellectual quality and rigor of all universities and degree programs are equivalent is simply wrong. The majority of ROTC programs exist at schools defined as non-competitive or less competitive. West Point’s SAT admittance scores fall below those of large state schools with over 50% admittance rates. A person can gain a commission through the ROTC program at Arizona State University, a school with an 82.5% admittance rate where an 1120 SAT score (58th percentile nationally by 2016 scores) will gain admission.8

By contrast, Princeton University has an Army ROTC program, from which GEN Mark Milley, the current Chairman of the Joint Chiefs of Staff, graduated. Princeton has an acceptance rate of 6.4%, and a 1440 SAT score represents the 25th percentile of Princeton students (96th percentile nationally by 2016 scores). To argue that cognitive screening for Army officers should simply be left to degree-granting institutions, of which there are many and of varying quality, is to not care at all about the intellectual capacities of officers. The Army cannot passively cede its own responsibility to assess officers. Screening and assessing candidates for admittance to the officer corps is a fundamental aspect of a profession.

  2. You cannot really test how “smart” someone is on paper. Some people are just not good test takers.

While we may lack formal education in psychology and psychological research, there is no shortage of peer-reviewed research that a layperson outside the field can access and understand with a thorough reading. A particularly useful source is a 1998 meta-analysis by Schmidt and Hunter covering 85 years of psychological research on personnel selection methods.11 General cognitive ability, “g,” is one of the most widely studied topics in all of psychology, so the authors did not lack data on the subject. They concluded that:

The most well known conclusion from this research is that … the most valid predictor of future performance and learning is general mental ability ([GMA], i.e., intelligence or general cognitive ability)

The authors look at a wide variety of selection methods and their validity in predicting job performance. They found that:

Overall, the 3 combinations with the highest multivariate validity and utility for job performance were GMA plus a work sample test (mean validity of .63), GMA plus an integrity test (mean validity of .65), and GMA plus a structured interview (mean validity of .63)

The general conclusion of this meta-analysis is that not only can general cognitive ability be tested, but it is a strong predictor of future performance and success. A wide body of research, including very readable meta-analyses, further bolsters the claims of predictive validity for general cognitive ability testing. So, to very briefly summarize decades of reproducible research: it is possible to test general cognitive ability, and general cognitive ability is one of the strongest known predictors of future success.

  3. Being smart does not necessarily mean you will be a good leader.

There are a wide range of characteristics that go into making a successful leader, but the Army has defined intellect as one of the three leadership attributes for a reason. Chapter 4 of ADP 6-22 Army Leadership states that:

Intellect is fundamental to successful leadership. Intellect consists of one’s brain power and knowledge. Intellect enables leaders to think creatively and critically to gain situational understanding, make sound judgments, solve problems, and take action. Intellect allows leaders to reason analytically, critically, ethically, and with cultural sensitivity.

If the Army does not rigorously and quantitatively screen officer candidates for cognitive ability, or intellect, how can it truly argue that it values that attribute among its leaders? There are of course ways to teach and develop leaders in the intellectual domain, but most psychological research suggests that cognitive ability, as measured by many different tests, is largely set by the time a person reaches adulthood. It can be changed slightly, but not by much, and gains are often fleeting.

Cognitive ability is only one leader attribute, but it is an important one. As officers advance through the ranks and confront increasingly complex operational and strategic problems, their cognitive ability becomes increasingly important to their capacity to understand and solve those problems. Officers are, and should be, regarded as knowledge-based professionals, but with no standardized accessions measure of officers’ ability to learn and process knowledge, the Army is losing out.


  1. Barno, David. “Military Brain Drain.” Foreign Policy, Foreign Policy, 13 Feb. 2013, foreignpolicy.com/2013/02/13/military-brain-drain/.

  2. Canter, Samuel. “Officer Specialization in the United States Army: The Solution to the Junior Officer Brain Drain and Generals Who Over-Generalize Is One and the Same.” Small Wars Journal, smallwarsjournal.com/jrnl/art/officer-specialization-united-states-army-solution-junior-officer-brain-drain-and-generals.

  3. Cancian, Matthew F., and Michael W. Klein. “Military Officer Quality in the All-Volunteer Force.” August 2015. https://www.brookings.edu/wp-content/uploads/2016/06/Military-officer-quality-in-the-all-volunteer-force.pdf

  4. Kane, Tim. “Why Our Best Officers Are Leaving.” The Atlantic, Atlantic Media Company, 19 Feb. 2014, www.theatlantic.com/magazine/archive/2011/01/why-our-best-officers-are-leaving/308346/.

  5. Coumbe, Arthur T., et al. Still Soldiers And Scholars?: An Analysis Of Army Officer Testing. US Army War College, Strategic Studies Institute, 2017.

  6. Mattis, Jim, and Jim West. Call Sign Chaos: Learning to Lead. Random House, 2019.

  7. Ryan, Camille L., and Kurt Bauman. “Educational Attainment in the United States: 2015.” Census.gov, US Census Bureau, Mar. 2016, www.census.gov/content/dam/Census/library/publications/2016/demo/p20-578.pdf.

  8. Admission Requirements, Arizona State University. https://admission.asu.edu/first-year/apply

  9. Hardison, Chaitra M., Carra S. Sims, and Eunice C. Wong. The Air Force Officer Qualifying Test: Validity, Fairness, and Bias. RAND Corporation, 2010. https://www.rand.org/content/dam/rand/pubs/technical_reports/2010/RAND_TR744.pdf

  10. Earles, James A., and Malcolm J. Ree. Air Force Officer Qualifying Test (AFOQT): Estimating the General Ability Component. Armstrong Laboratory, Brooks AFB, TX, December 1991.

  11. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. https://doi.org/10.1037/0033-2909.124.2.262

  12. Frey, Meredith C., and Douglas K. Detterman. “Scholastic Assessment or g?” Psychological Science, vol. 15, no. 6, 2004, pp. 373–378., doi:10.1111/j.0956-7976.2004.00687.x.

  13. Mackintosh, N. J. (2011). IQ and Human Intelligence (second ed.). Oxford: Oxford University Press. ISBN 978-0-19-958559-5.

  14. J. J. Card and W. M. Shanner, Development of a ROTC/Army Career Commitment Model: Management Summary Report, Palo Alto, CA: American Institutes for Research, March 1976

  15. Ryan, Camille; Siebens, Julie (March 2016). "Educational Attainment in the United States: 2015" (PDF). U.S. Census Bureau. Retrieved December 22, 2017.

  16. Ibid.; Douglas K. Detterman and Robert J. Sternberg, eds., Transfer on Trial: Intelligence, Cognition, and Instruction, Norwood, NJ: Ablex Publishing, 1993; Rebecca Grossman and Eduardo Salas, “The Transfer of Training: What Really Matters,” International Journal of Training and Development, Vol. 15, Iss. 2, 2011, pp. 103-120.
