The IQ test dates back to the late 1800s. The earliest attempts to measure intelligence timed how quickly a person responded to stimuli. Researchers largely abandoned this approach, however, when reaction speed turned out to be a poor predictor of intelligence.
Alfred Binet created the first modern intelligence test in 1905. He developed it to determine whether a child would be able to keep up with their peers in the educational system, using age as the point of reference.
He created a test that arranged questions based on the average ability of children of different ages. In this way, the test could show how a child performed compared with other children of a similar age.
For example, if a child could answer questions designed for children 2 years older, that child would test as being 2 years ahead in “mental age.” Binet then subtracted the child’s chronological age from this mental age to give an intelligence score, so a child ahead of their peers received a positive score.
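Binet’s difference score is commonly described as mental age minus chronological age. A minimal sketch of that arithmetic (the function name is illustrative, not from the original test):

```python
def binet_difference_score(mental_age, chronological_age):
    """Binet-style score: years ahead (+) or behind (-) of same-age peers."""
    return mental_age - chronological_age

# An 8-year-old answering questions typical of 10-year-olds
# scores +2: two years ahead in mental age.
print(binet_difference_score(10, 8))
```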
Though Binet’s model was an improvement in determining intelligence, it had some flaws.
William Stern proposed a different model: the intelligence quotient (IQ). Instead of subtracting one age from the other, Stern divided a person’s mental age by their chronological age. The formula he proposed was (mental age) / (chronological age), a ratio that was later commonly multiplied by 100 to give a whole-number score.
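Stern’s ratio can be sketched in a few lines. The multiplication by 100 is the later scoring convention rather than part of Stern’s original proposal:

```python
def ratio_iq(mental_age, chronological_age):
    """Stern-style ratio IQ, scaled by 100 per later convention."""
    return 100 * mental_age / chronological_age

# A 10-year-old with a mental age of 12 scores 120;
# a mental age equal to chronological age scores exactly 100.
print(ratio_iq(12, 10))
```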
Still, Stern geared his version of the test toward children. Because mental age stops rising in adulthood while chronological age does not, the ratio breaks down for adults.
Eventually, David Wechsler solved this issue by comparing test scores with those of a person’s peers and normalizing the average score to 100.
As a result, the “quotient” is no longer a quotient at all. Instead, the score reflects how a person performs relative to their peers.
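A peer-relative score of this kind can be sketched as a z-score rescaled so that the peer average maps to 100. The standard deviation of 15 is the convention used by Wechsler-style tests; the function and data here are illustrative:

```python
from statistics import mean, stdev

def deviation_iq(raw_score, peer_scores, mean_iq=100, sd_iq=15):
    """Deviation-style IQ: position relative to peers, rescaled
    so the peer mean is 100 and one peer SD is 15 points."""
    mu = mean(peer_scores)
    sigma = stdev(peer_scores)
    return mean_iq + sd_iq * (raw_score - mu) / sigma

# Hypothetical peer group with mean 50 and SD 10:
peers = [40, 50, 60]
print(deviation_iq(50, peers))  # at the peer average -> 100
print(deviation_iq(60, peers))  # one SD above average -> 115
```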
The U.S. military adapted the test into a multiple-choice format for screening recruits. Over time, educational and workplace settings also began using IQ tests to help assess a person’s intellectual strengths.
Source Article from https://www.medicalnewstoday.com/articles/327241.php