IBM Research, Almaden

Previous Affiliations and Postdoc:
Synthetic Cognition Group, Los Alamos National Laboratory
Thomas M. Siebel Center for Computer Science, University of Illinois at Urbana-Champaign
Complex recognition abilities are inherent in even the simplest organisms, yet recognition remains one of the most difficult problems for artificial methods. The neural organization responsible is not yet well understood. Models grounded in biological insight can both deepen our understanding of the brain and further algorithm development. I therefore pursue a multidisciplinary approach to recognition and classification, with a biological background (neuroscience, neurology, and cognitive psychology) as a guide and computer science as a test bed.
One insight involves top-down feedback connections, in which output units (or neurons) feed back to their own inputs. Such feedback is ubiquitous in the brain, including in the well-studied olfactory bulb, which has at least two layers of feedback. However, most conventional artificial neural network theories do not incorporate top-down feedback during the recognition phase; instead, they use it only during learning.
My models focus on top-down feedback during recognition. Top-down feedback continuously modifies input activation; the modified input activity is redistributed to the network, which in turn generates feedback on this redistribution. This cycle is repeated iteratively until the input is recognized.
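The iterative loop above can be sketched in code. This is a minimal illustrative sketch, not the published model: it assumes a simple shunting update in which output activity feeds back to divide (regulate) the inputs, and each output is then re-estimated from the modified inputs. The function name, the binary weight matrix `W`, and the normalization by each output's number of inputs are assumptions for illustration.

```python
import numpy as np

def feedback_recognition(W, x, iters=50, eps=1e-9):
    """Sketch of recognition via iterative top-down feedback.

    W: (outputs x inputs) binary connection matrix
    x: input activation vector
    Each iteration, current output activity is fed back to shunt
    (divide) the inputs, and outputs are re-estimated from the
    modified input activity.
    """
    n_out, n_in = W.shape
    n = W.sum(axis=1)                 # number of inputs per output (normalizer)
    y = np.full(n_out, 1.0 / n_out)   # start with uniform output activity
    for _ in range(iters):
        feedback = W.T @ y            # top-down expectation at each input
        s = x / (feedback + eps)      # modified (shunted) input activity
        y = (y / n) * (W @ s)         # redistribute and update outputs
    return y

# Two overlapping patterns; input matches the first pattern only.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 1.0, 0.0])
y = feedback_recognition(W, x)        # converges toward [1, 0]
```

With this update rule the shared input (the middle feature) is dynamically credited to the output whose other features are also active, so the ambiguity resolves over iterations rather than being fixed at learning time.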
This paradigm requires a different, symbolic-like form of weights, which enables simpler representations of the networks' fixed points (solutions). It also inherently displays certain human cognitive phenomena. Overall, these findings challenge traditional assumptions and offer a biologically plausible, flexible, and dynamic approach to recognition.

Note: Over the years I have explored several names for this configuration, including Input Shunt Networks, Recurrent Loop Networks, Recurrent Feedback Neural Networks, Input Feedback Networks, Regulatory Feedback Networks, and Supervised Generative Models During Recognition. They all represent the same "Supervised Generative Models During Recognition".
Selected Papers / Proceedings
Achler, T., Towards Bridging the Gap between Pattern Recognition and Symbolic Representations Within Neural Networks, Neural-Symbolic Learning and Reasoning, AAAI-2012 PDF
Achler, T., Artificial General Intelligence Begins with Recognition: Evaluating the Flexibility of Recognition, Chapter in Theoretical Foundations of Artificial General Intelligence, 2012, in press. PDF
Achler, T., Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns, arXiv:1112.2988 2011 PDF
Achler, T., Non-Oscillatory Dynamics to Disambiguate Pattern Mixtures, Chapter 4 in Relevance of the Time Domain to Neural Network Models 2011. PDF
Achler, T., Bettencourt, L., Evaluating the Contribution of Top-Down Feedback and Post-Learning Reconstruction, Biologically Inspired Cognitive Architectures AAAI Proceedings, 2011. PDF
Achler, T., Vural D., Amir, E., Counting Objects with Biologically Inspired Regulatory-Feedback Networks, Neural Networks IJCNN IEEE Proceedings, 2009. PDF
Achler, T., Omar C., Amir, E., Shedding Weights: More With Less, Neural Networks IJCNN IEEE Proceedings, 2008. PDF
Achler, T., Amir, E., A Genetic Classifier Account for the Regulation of Expression, Chapter in Computational Neuroscience, Springer, 2010. PDF
Workshops / Tutorials