Neural networks
With Yoshio Takane and Yuriko Oshima-Takane of McGill, and Jeff Elman of UCSD, I developed several new techniques for analyzing knowledge representations in constructivist neural networks, including contribution analysis, an extension of a technique initially used in another context by Sanger. Such techniques trace the knowledge representations that networks develop over time, which can then be compared to the knowledge representations of children at different ages. Unlike techniques that examine hidden-unit activations or connection weights alone, contribution analysis considers unit activations and weights simultaneously. Network contributions are the products of sending-unit activations and the connection weights entering output units, so they effectively summarize the network’s interpretation of a problem just before it makes an output response. Subjecting the usually large and complicated matrix of problems by contributions to Principal Components Analysis frequently reveals the nature of the network’s current knowledge representation of a problem, and these representations are surprisingly consistent across different networks. The fact that constructive networks include cross-connections that bypass layers of hidden units underscores the inadequacy of analyses based on hidden-unit activations alone.
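A minimal sketch of the two steps involved, assuming a feedforward network stored as NumPy arrays; the function names and shapes here are illustrative conveniences, not the published implementation:

```python
import numpy as np

def contributions(sending_activations, output_weights):
    """Compute network contributions for a batch of problems.

    A contribution is the product of a sending-unit activation and the
    weight on its connection into an output unit, i.e., the signal an
    output unit actually receives from that sender on a given problem.

    sending_activations: (n_problems, n_senders) activations of all units
        feeding the output layer (hidden units and, via cross-connections,
        input units as well).
    output_weights: (n_senders, n_outputs) weights entering output units.

    Returns an (n_problems, n_senders * n_outputs) contribution matrix.
    """
    n_problems, n_senders = sending_activations.shape
    n_outputs = output_weights.shape[1]
    # Broadcast each problem's sender activations against each weight.
    c = sending_activations[:, :, None] * output_weights[None, :, :]
    return c.reshape(n_problems, n_senders * n_outputs)

def principal_components(contrib_matrix, n_components=3):
    """PCA (via SVD of the centered matrix) of problems x contributions.

    Component scores place each problem in a low-dimensional space;
    clusters of problems with similar scores suggest how the network
    currently represents the task.
    """
    centered = contrib_matrix - contrib_matrix.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]  # problem scores
    loadings = vt[:n_components]                     # contribution loadings
    return scores, loadings
```

Repeating this analysis at successive points in training yields the trajectory of representations that can be compared with children's performance at different ages.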
Almost all connectionist modeling is unrealistic in the sense that it begins from scratch (with random connection weights), whereas human learning more typically builds on current knowledge. With former graduate student Francois Rivest, I invented a new algorithm, knowledge-based cascade-correlation (KBCC), that recruits previously learned networks as well as single hidden units. Our evidence indicates that KBCC recruits the most relevant knowledge that it possesses, providing a significant speed-up of learning compared to other neural algorithms. KBCC is the only knowledge-based neural algorithm that places no restrictions on the inputs and outputs of source knowledge (other than the constraint that the source knowledge can be represented as a differentiable function), making it more generally useful than other techniques. Current work focuses on applying KBCC to large-scale, realistic problems and to modeling psychological findings on the relation between knowledge and learning. An interesting feature of KBCC is that it appears to implement a novel type of neural compositionality, something widely believed to be impossible for unstructured neural networks.
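A toy sketch of the recruitment idea, assuming candidate outputs have already been computed as NumPy arrays; in the actual algorithm each candidate's input weights are also trained to maximize its score before selection, and the covariances are normalized:

```python
import numpy as np

def recruitment_score(candidate_outputs, residual_error):
    """Cascade-correlation-style score: magnitude of covariance between
    a candidate's outputs and the network's residual error, summed over
    candidate outputs and network output units.

    candidate_outputs: (n_problems, k) -- a single hidden unit has one
        column; a previously learned source network can have several.
    residual_error: (n_problems, n_network_outputs).
    """
    co = candidate_outputs - candidate_outputs.mean(axis=0)
    err = residual_error - residual_error.mean(axis=0)
    return np.abs(co.T @ err).sum()

def select_candidate(candidates, residual_error):
    """Pick the candidate whose outputs best track the residual error.

    candidates: dict mapping a candidate name to its (n_problems, k)
        output array on the current training patterns. Because single
        units and stored networks compete in the same pool, relevant
        prior knowledge is recruited whenever it outscores a fresh unit.
    """
    scores = {name: recruitment_score(outputs, residual_error)
              for name, outputs in candidates.items()}
    return max(scores, key=scores.get)
```

The key design point is that a recruited source network is treated just like a hidden unit: it is installed with trainable input and output connections, which is what allows old knowledge to be composed into solutions for new problems.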