Transformers Aren’t Turing-complete, But a Good Disguise Is All You Need

Transformers are a neural network architecture. They are behind some of the most successful large language models (LLMs) we see today, like GPT-3, PaLM, BARD, and GPT-4. I have seen several papers claiming that transformers are Turing-complete, meaning that they can be used to simulate any computer program.

But transformer architectures are not Turing-complete. They cannot simulate arbitrary computer programs. The papers that claim otherwise are making a conceptual error. Transformers are impressive and extraordinary tools, but we need to be honest about what they can do and about the challenges that lie ahead of us on the path toward true artificial general intelligence.

Continue reading “Transformers Aren’t Turing-complete, But a Good Disguise Is All You Need”

It’s Not Intelligent If It Always Halts: A Critical Perspective on Current Approaches to AGI

Imagine a conversation with one of these newly released AI chatbots. You ask it to solve a tricky math problem. It responds with “That seems kind of hard. Give me some time to think.” After a few minutes it comes back with “I haven’t solved it yet, and I am not sure I can. Would you like me to continue working on it?” Another few minutes pass and then it comes back with “Aha! I figured it out!” and proceeds to explain a neat and creative solution.

This scenario can never occur with PaLM, BARD, GPT-4, or any of the other transformer-based large language models that are thought to be on the path to general intelligence. In all of these models, each word in the machine’s response is produced in a fixed amount of time. The model cannot go away and “think” for a while. This is one of the reasons why I believe a solely transformer-based model can never be “intelligent”. (If you disagree with my characterization of transformers here, see section 4 and also this post).
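To make the point concrete, here is a minimal sketch of the kind of decoding loop these models run. The toy model below is my own illustration (names like toy_forward and generate are not any real library’s API): each word is emitted after a fixed, bounded amount of computation, and nothing in the loop lets the model postpone its answer and keep working.

```python
import numpy as np

# A toy stand-in for a transformer language model. The names here (toy_forward,
# generate) are illustrative; this is not any real library's API.
VOCAB, D, N_LAYERS = 1000, 64, 4
rng = np.random.default_rng(0)
layers = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_LAYERS)]
embed = rng.standard_normal((VOCAB, D))
unembed = rng.standard_normal((D, VOCAB))

def toy_forward(token_ids):
    """One forward pass: always the same fixed number of matrix multiplies."""
    x = embed[token_ids].mean(axis=0)     # crude pooling over the context
    for w in layers:                      # exactly N_LAYERS steps, no more
        x = np.tanh(x @ w)
    return x @ unembed                    # logits over the vocabulary

def generate(prompt_ids, n_new_tokens):
    """Autoregressive decoding: one bounded forward pass per emitted word."""
    ids = list(prompt_ids)
    for _ in range(n_new_tokens):
        logits = toy_forward(np.array(ids))
        ids.append(int(logits.argmax()))  # the next word comes out immediately;
                                          # there is no way to defer it and "think"
    return ids

print(generate([1, 2, 3], n_new_tokens=5))
```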

Summary: I argue here that intelligence requires the ability to explore “trains of thought” that are potentially never-ending. One cannot know a priori whether a certain train of thought will lead to a solution or whether it is futile. The only way to find out is to actually explore it. And this type of exploration comes with the risk of never knowing whether you are on the path to a solution or whether your current path will go on forever. Intelligence involves problem-solving, and problem-solving requires arbitrary amounts of time. If a computer program is bound to finish quickly by virtue of its architecture, it cannot possibly be capable of general problem-solving.
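As a toy illustration of what I mean by a potentially never-ending train of thought, consider the little search below (a sketch of my own; the Collatz rule is just a convenient stand-in for open-ended exploration). You cannot tell, without actually running the outer loop, whether it will ever return.

```python
# A toy "train of thought" that may or may not end. Whether the outer search
# below halts for a given target is not knowable without actually running it.

def collatz_steps(n):
    """Follow the Collatz rule from n down to 1 (no one has proved this always halts)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def search_for_slow_starter(min_steps):
    """Explore candidates until one needs at least min_steps steps to reach 1."""
    n = 1
    while True:                        # no a priori bound on how long this runs
        if collatz_steps(n) >= min_steps:
            return n
        n += 1

print(search_for_slow_starter(100))    # this one happens to return quickly (27)
# search_for_slow_starter(10**6)       # ...and this one? No way to know but to try.
```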

In the summary paragraph above, I appealed to a number of intuitive notions (e.g. “train of thought”, “exploration”, “problem-solving”). In order to make my argument rigorous, I first have to introduce a few concepts rooted in the classical theory of computation. In section 1, I introduce three types of computer programs. In section 2, I describe what an unintelligent problem-solver can look like. In section 3, I describe what is needed to make the unintelligent problem-solver intelligent. In section 4, I explain why transformers can never be general problem-solvers. In section 5, I briefly discuss what I think needs to be done to address this problem.

The Researcher’s Guide for Being Mind Blown by a Neural Network

Every so often a new neural network makes headlines for solving a computational problem. It is sometimes hard for me to judge how impressive these achievements are without diving into the details of the models. But my criteria are always the same, and it should be easy for those who are familiar with these models to evaluate them against those criteria. For this purpose I have made a flowchart for how impressed I would be by a neural network. If you know of a new neural net that reaches “wow”, please let me know about it, and if it reaches “mind-blown” you have permission to wake me up in the middle of the night, since I know of no such examples.

Continue reading “The Researcher’s Guide for Being Mind Blown by a Neural Network”

A 7 Minute Timer Has Been Discovered in Neurons

How does the brain keep track of time? This question has intrigued neuroscientists for decades. Circadian clocks, which oscillate every 24 hours, are known to be implemented at the level of molecules and genes. But it is widely believed that keeping track of time over shorter durations (e.g. seconds and minutes) arises from electrical/synaptic activity patterns, not from molecular activity. The idea is that cells can be connected in ways that result in oscillations or sequential activity (e.g. one neuron fires at the 1s mark, the next fires at the 2s mark, etc.). As with most of our theories of short-term memory, if all the cells in a network go silent for a moment, the timer falls apart. The spiking activity is what keeps the clock going. This theory has had its opponents, but I think it is fair to say that it has been a commonly held view in neuroscience.
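For intuition, here is a minimal sketch of the kind of activity-based timer this view has in mind (my own toy illustration, not a model from any paper): activity hops along a chain of units, elapsed time is read out from whichever unit is currently active, and a single silent moment erases the count.

```python
import numpy as np

# A toy activity-based timer, in the spirit of the sequential-activity view
# described above (my own illustration, not a model from the papers).
N = 10                         # ten units, so the chain can count ten ticks
state = np.zeros(N)
state[0] = 1.0                 # activity starts at the first unit

def tick(state, silence=False):
    """Advance one time step: activity hops to the next unit in the chain."""
    if silence:
        return np.zeros_like(state)           # a silent moment erases the count
    return np.roll(state, 1)

for _ in range(5):
    state = tick(state)
print("elapsed ticks:", int(np.argmax(state)) if state.any() else "lost")   # 5

state = tick(state, silence=True)             # the network goes quiet once...
state = tick(state)
print("elapsed ticks:", int(np.argmax(state)) if state.any() else "lost")   # lost
```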

A recent study, however, has made a serious crack in this paradigm. In a series of two papers from the Crickmore lab at Harvard University (one published last year and another last month), Thornquist and colleagues show that a single neuron can keep track of time in a completely silent manner. The time interval they studied was a 7 minute period in mating fruit flies. I believe this is a landmark study that every neuroscientist should know about. So here is my attempt at explaining it in simple terms.

Continue reading “A 7 Minute Timer Has Been Discovered in Neurons”