Thomas Shultz (PhD Yale, Psychology) is Professor Emeritus of Psychology and Associate Member of the School of Computer Science at McGill University. He taught courses in Computational Psychology and Cognitive Science. He is a Fellow of the Canadian Psychological Association, and a founder and twice Director of the McGill Cognitive Science Programs. Research interests include AI, cognitive science, cognitive development, evolution and learning, relations between knowledge and learning, decision making, problem solving, memory, neural networks, and agent-based modeling. He has over 440 research publications and over 8900 citations in these areas.
News
- We recently expanded our 2024 paper on how well GPT-4 understands what it reads. The article has been accepted for publication and will soon appear in Royal Society Open Science. In the meantime, you can access it on arXiv by searching for the title: Text Understanding in GPT-4 vs Humans. We examine whether a leading AI system, GPT-4, understands text as well as humans do, first using a well-established standardized test of discourse comprehension. On this test, GPT-4 performs slightly, but not statistically significantly, better than humans, given the very high level of human performance. Both GPT-4 and humans make correct inferences about information that is not explicitly stated in the text, a critical test of understanding. Next, we use more difficult passages to determine whether they allow larger differences to emerge between GPT-4 and humans. GPT-4 does considerably better on this more difficult text than do the high school and university students for whom the passages are designed as admission tests of reading comprehension. Deeper exploration of GPT-4's performance on material from one of these admission tests reveals generally accepted signatures of genuine understanding, namely generalization and inference.
- The description of our work on humour has been updated with a bizarre criminal incident in Newfoundland. Check it out under RESEARCH HIGHLIGHTS for three new jokes.
- We presented three papers and two abstracts at the Cognitive Science conference in Sydney, Australia, July 2023. One paper used our Neural Probability Learner and Sampler (NPLS) model to simulate so-called pure reasoning in infants. Another paper presented a perceptual front end to NPLS using a convolutional neural network, allowing more natural representation of physical objects such as collections of marbles. The third paper presented a simple model of number comparison that simulates fundamental empirical phenomena on accuracy and response time. This model also generalizes robustly to more advanced tasks involving multi-digit integers, negative numbers, and decimal numbers.
- Our paper simulating and explaining the learning and use of probability distributions in infants is in the November 2022 issue of Psychological Review.
- Our invited chapter on computational models of developmental psychology is published in The Cambridge Handbook of Computational Cognitive Sciences (2023).
- Our invited chapter, Computational approaches to cognitive development: Bayesian and artificial-neural-network models, has been published in The Cambridge Handbook of Cognitive Development (2022).
- Our paper showing that group membership trumps perceived reliability, warmth, and competence in social learning is published in Psychological Science (2022).
- More information on each of the foregoing papers can be found under PUBLICATIONS / Learning and development.
- Our invited chapter on the Cascade-Correlation machine learning algorithm for the Encyclopedia of Machine Learning and Data Science is published online at Springer (2022). More information on this chapter can be found under PUBLICATIONS / Neural networks.