Tsvi Achler MD/PhD/EECS




CV
Tutorial Video
Email: achler@gmail.com



 

RESEARCH INTERESTS:

Sensory recognition is an essential foundation upon which cognition and intelligence are built. Without recognition the brain cannot interact with the world, and the internal form in which recognition information is stored dictates how memory and processing are achieved. Moreover, diseases and injuries affecting the circuits responsible for sensory processing and recognition (e.g. Alzheimer's disease, stroke, schizophrenia, and Parkinson's disease) are currently incurable. The ultimate goal is a better understanding of these circuits in order to predict, interact with, and develop treatment strategies for diseases affecting processing.

The fundamental question I study is: in what form does the brain store information in order to perform the most flexible recognition? Understanding the underlying, biologically motivated computations responsible for flexible recognition will have broad impact in neuroscience, cognitive psychology, computer science, and many other fields.

One insight involves top-down feedback connections, where output units or neurons feed back to their own inputs. Such connections are found ubiquitously in the brain; however, most conventional artificial neural network theories do not incorporate top-down feedback during the recognition phase. I focus on top-down feedback during recognition: the feedback modifies input activation, the modified input activity is re-distributed to the network, and the network generates new feedback on this re-distribution. The cycle is repeated iteratively until the inputs are recognized.
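A minimal sketch of this recognition cycle is below (in Python; the released code under Software is in Matlab). The weight matrix W, the shunting of each input by the total feedback it receives, and the normalization of each output by its total weight are illustrative assumptions chosen to match the description above, not an exact copy of any one published implementation.

    import numpy as np

    def recognize(W, x, n_iter=25, eps=1e-9):
        """Illustrative sketch of recognition through iterative top-down feedback.

        W : (M, N) nonnegative weight matrix, one row per output (label)
        x : (N,)   input pattern
        Returns the output activations y after n_iter feedback cycles.
        """
        M, N = W.shape
        y = np.ones(M) / M                     # start with uniform output activity
        n = W.sum(axis=1)                      # total input weight per output
        for _ in range(n_iter):
            feedback = W.T @ y                 # top-down feedback onto each input
            q = x / np.maximum(feedback, eps)  # inputs modified (shunted) by the feedback they receive
            y = y * (W @ q) / n                # re-distribute modified inputs and update outputs
        return y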

This paradigm promotes a simpler, symbolic-like form of weights and thus simpler representations. It also inherently displays human cognitive phenomena that traditional classifiers do not, such as a speed-accuracy tradeoff and difficulty with similar patterns. Overall, these findings challenge conventional assumptions and offer a biologically plausible, flexible, and dynamic approach to recognition.
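As a toy illustration of these phenomena using the recognize() sketch above, two overlapping (similar) patterns start out ambiguous and separate more cleanly as the number of feedback iterations grows, which is one way the speed-accuracy tradeoff appears. The patterns and numbers here are made up purely for illustration.

    # Output A uses inputs 0 and 1; output B overlaps with A on input 1.
    W = np.array([[1., 1., 0., 0.],
                  [0., 1., 1., 1.]])
    x = np.array([1., 1., 0., 0.])             # present pattern A

    for steps in (1, 5, 50):
        y = recognize(W, x, n_iter=steps)
        print(steps, np.round(y, 3))           # activity concentrates on A as iterations increase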

Note: Over the years my publications have used different names for this configuration, including: Input Shunt Networks, Recurrent Loop Networks, Recurrent Feedback Neural Networks, Input Feedback Networks, Regulatory Feedback Networks, and Supervised Generative Models During Recognition. They all refer to the same supervised networks that use top-down feedforward-feedback connections during recognition.


REPRESENTATIVE WORK:

Selected Papers / Proceedings

Achler, T., Symbolic Neural Networks for Cognitive Capacities, Biologically Inspired Cognitive Architectures Special Issue on Neural Symbolic Networks, 2014. PDF

Achler, T., Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns, arXiv:1112.2988 PDF

Achler, T., Towards Bridging the Gap between Pattern Recognition and Symbolic Representations Within Neural Networks, Neural-Symbolic Learning and Reasoning, AAAI-2012 PDF

Achler, T., Artificial General Intelligence Begins with Recognition: Evaluating the Flexibility of Recognition, Chapter in Theoretical Foundations of Artificial General Intelligence 2012 PDF

Achler, T., Non-Oscillatory Dynamics to Disambiguate Pattern Mixtures, Chapter 4 in Relevance of the Time Domain to Neural Network Models 2011. PDF

Achler, T., Bettencourt, L., Evaluating the Contribution of Top-Down Feedback and Post-Learning Reconstruction, Biologically Inspired Cognitive Architectures AAAI Proceedings, 2011. PDF

Achler, T., Amir, E., A Genetic Classifier Account for the Regulation of Expression, Chapter in Computational Neuroscience, Springer, 2010. PDF

Achler, T., Vural D., Amir, E., Counting Objects with Biologically Inspired Regulatory-Feedback Networks, Neural Networks IJCNN IEEE Proceedings, 2009. PDF

Achler, T., Omar C., Amir, E., Shedding Weights: More With Less, Neural Networks IJCNN IEEE Proceedings, 2008. PDF

Achler, T., Amir, E., Input Feedback Networks: Classification and Inference Based on Network Structure, Artificial General Intelligence AAAI Proceedings V1: 15-26, 2008. PDF


Older Talk Video


Workshops / Tutorials

Metrics for 'Human Level' AI

Tutorial - Plasticity Revisited: Motivating New Algorithms Based On Recent Neuroscience Research


Software

Matlab