## Imagining examples of a learned class

Humans are not only adept at recognizing what class an instance belongs to; more remarkably, they can also easily imagine instances of a category they have learned. Inspired by this, we propose a framework that enables discriminative artificial neural networks to generate novel examples after learning a category. Our framework is based on a Markov chain Monte Carlo method, the Metropolis-adjusted Langevin algorithm (MALA), which exploits gradient information of the class distribution to direct its exploration toward regions of high probability, thereby imagining examples under various external constraints. Through extensive simulations, we demonstrate the efficacy of the proposed framework.
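To make the sampling idea concrete, here is a minimal sketch of MALA in Python. It is not the paper's implementation: the target is a toy analytic density standing in for a trained network's class distribution, and the function names (`mala_sample`, `log_p`, `grad_log_p`) are illustrative. The chain drifts along the gradient of the log-density and applies a Metropolis accept/reject correction so that it samples the target exactly.

```python
import numpy as np

def mala_sample(log_p, grad_log_p, x0, step=0.1, n_steps=1000, rng=None):
    """Metropolis-adjusted Langevin algorithm (MALA).

    Each proposal takes a Langevin drift step guided by grad log p plus
    Gaussian noise; a Metropolis-Hastings correction (with the asymmetric
    proposal density) makes the chain target p exactly.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []

    def log_q(dst, src):
        # Log-density (up to a constant) of proposing dst from src
        mean = src + 0.5 * step**2 * grad_log_p(src)
        return -np.sum((dst - mean) ** 2) / (2 * step**2)

    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Drift toward higher probability, plus diffusion
        x_prop = x + 0.5 * step**2 * grad_log_p(x) + step * noise
        log_alpha = (log_p(x_prop) - log_p(x)
                     + log_q(x, x_prop) - log_q(x_prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = x_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian as a stand-in for a learned class density
log_p = lambda x: -0.5 * np.sum(x**2)
grad_log_p = lambda x: -x

chain = mala_sample(log_p, grad_log_p, x0=np.zeros(2),
                    step=0.5, n_steps=5000, rng=np.random.default_rng(0))
```

In the paper's setting, `log_p` would be the log of the network's positive-class output (plus any constraint terms), with the gradient obtained by backpropagation through the input.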

This figure shows model results for generating positive examples of the continuous exclusive-or problem under the constraint that they also fall on a particular sine function. The continuous exclusive-or problem has positive examples only in the upper-left and lower-right regions of the input space. The generated examples are depicted as red dots; they are so close together that they appear to form solid red curves.


Importantly, our framework bridges conventionally distinct computational, algorithmic, and implementational levels of analysis. Our paper received unanimously positive reviews (5/5) from the highly competitive Cognitive Science Society annual conference, held in July 2017 in London. An earlier *arXiv* version of the paper was favorably reviewed by Synced: In-Depth AI Technology & Industry Review, which was impressed by our efforts in "bestowing important human-like ... features such as knowledge and imagination to machines."

Nobandegani, A. S., & Shultz, T. R. (2017). Converting cascade-correlation neural nets into probabilistic generative models. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), *Proceedings of the 39th Annual Conference of the Cognitive Science Society* (pp. 1029-1034). Austin, TX: Cognitive Science Society.