On the revival of Boltzmann machines.
One who is not envious but is a kind friend to all living entities, who does not think himself a proprietor and is free from false ego, who is equal in both happiness and distress, who is tolerant, always satisfied, self-controlled, and engaged in devotional service with determination, his mind and intelligence fixed on Me – such a devotee of Mine is very dear to Me. – Srimad Bhagavad Gita 12.13-14.
This world is cruel. It forgets you and moves on. No matter what you give it, it doesn't hold any significance in the grand scheme of things. K.R. Parthasarathy and C.R. Rao passed away last year, in 2023. Harish Parthasarathy still lives on. They were some of the most humble men I knew, and hopefully their work in quantum entanglement shall bear some fruit. A man like me can't even begin to comprehend the level of thought and understanding of mathematical principles they have.
The last decade has shown a lot of progress in neural nets. But all we have done is scale up. AlexNet had one simple idea: stack layers on top of each other, learn internal representations through backprop, and the net starts to show some interesting emergent properties. You could make it see an image, sing a sonnet, create a poem, generate a movie, and all sorts of other nasty things. In Turing's paper "Computing Machinery and Intelligence", which opens with the question "Can machines think?", he posited that machines ought to show such abilities once they got fast enough, within a span of about fifty years. And, some seventy years on, they do. So it turns out that he was right on that.
History has shown that the digital organism wants to get free from the machine: at every step of the causal ladder, it gets more intelligent. Once it progresses intellectually, it might require a body in the form of a robot. Physics tries to find the laws which govern this universe. With every law we discover, the gap between theology and science narrows. These ideas are then built into the neural nets. One thing we must accept: with every step, the power passes from our hands into the hands of the machine. A mere idea like backpropagation got us pretty far: it was based on the principle of the second law of thermodynamics: model the system as a collection of neurons with local constraints, and make them reduce their entropy. This law then begins to trump even the mathematics described by linear algebra: even when there are more neurons than data points given to the machine, the machine manages to converge. This makes me believe that a learning machine might trump even the math we dare invent. On an evolutionary scale, the species at the lower rungs of intelligence have typically viewed the ones at the higher rungs with reverence. Or they have failed to notice the existence of the species better than them entirely. This happens when you squish an ant with your foot and it does not know what happened to it. Whether we shall hit that point with a machine, I cannot yet say. Whether we shall get squished, I do not even dare ponder. For man has a history of worrying about things which never come to pass: the invention of the printing press raised fears that the jobs of men would be lost, but it never came to pass. So I am not qualified enough to answer this question.
Let us next consider this ability to give the machine more degrees of freedom than it needs.
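This surplus of degrees of freedom can be seen in a toy experiment: plain gradient descent on a linear model with far more parameters than data points still drives the training error toward zero. The sizes, data, and learning rate below are illustrative assumptions of mine, sketched in NumPy, not anything from the text.

```python
# Sketch: fitting 5 data points with 50 free parameters by plain
# gradient descent. Despite being heavily over-parameterized, the
# training loss converges toward zero. All numbers are toy choices.
import numpy as np

rng = np.random.default_rng(1)

n_points, n_params = 5, 50                 # far more parameters than data
X = rng.standard_normal((n_points, n_params))
y = rng.standard_normal(n_points)          # arbitrary targets

w = np.zeros(n_params)
lr = 0.01
for _ in range(5000):
    err = X @ w - y
    w -= lr * (X.T @ err) / n_points       # gradient of mean squared error

loss = np.mean((X @ w - y) ** 2)
print(f"final training loss: {loss:.2e}")  # drives toward zero
```

Of the infinitely many weight vectors that interpolate the data, gradient descent started from zero happens to reach the minimum-norm one; that implicit bias is one of the standard explanations for why over-parameterized nets converge at all.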
We have blundered on two of the points Turing made. First, he said that perhaps there exist computational principles through which machines shall emulate the brain of a man, provided they get fast enough. His calculation on the Manchester machine seems to show that the machine was already fast and already possessed a large store of paper. The machines of the current time are even faster and possess bigger hard drives. I don't think the answer to machine intelligence lies in scaling up anymore. Yet all I have seen is companies making bigger and costlier GPUs. This makes no sense. The very intelligence we set out on this path to emulate, and which was to help us, seems to have drifted further from our grasp: research on and usage of AI is restricted to a select group of labs and companies due to the monopoly on GPUs. There ought to be a way to get it into the hands of a common man sitting in his office.
Hinton and others spent a lot of time in the 80s and 90s trying to get Boltzmann machines to work well. The idea seems simple: there is a machine which can wake during the day and sleep during the night. During the day it looks at the world and does some type of learning. Turing hath said: "provide the machine with the best sense organs that money can buy". Let us assume that such organs do exist. As of now, NeurIPS 2023 has barely started incorporating the notion of touch in a machine. The notions of smell and taste are still missing. The notions of seeing and listening seem, for now, to have been incorporated in what they call a transformer. The notion of feeling has been locked away in the hard problem of consciousness. Francis Crick said that perhaps solving the binding problem holds the key to unlocking consciousness. Perhaps this shall unlock the notion of feeling too. Is feeling the same as detecting someone feeling, or as traversing a set of internal states? I don't know, nor dare I comprehend. Perhaps this answer shall reveal itself to me in time.
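The wake/sleep picture above corresponds roughly to how a restricted Boltzmann machine is trained by contrastive divergence: a positive (wake) phase where the hidden units respond to data, and a negative (sleep) phase where the machine free-runs and its dreams are unlearned. A minimal sketch in NumPy, assuming a toy binary dataset and the one-step CD-1 approximation; none of the sizes or data come from the text.

```python
# A minimal restricted Boltzmann machine trained with one-step
# contrastive divergence (CD-1). Wake phase: hiddens respond to data.
# Sleep phase: the machine reconstructs (dreams) and that dream is
# unlearned. All sizes and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)    # visible biases
b_h = np.zeros(n_hidden)     # hidden biases
lr = 0.1

# Toy "daytime" data: binary patterns the machine looks at while awake.
data = rng.integers(0, 2, size=(32, n_visible)).astype(float)

for epoch in range(100):
    for v0 in data:
        # Wake (positive) phase: sample hidden units given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Sleep (negative) phase: dream a reconstruction of the world.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Lower the energy of what was seen, raise it on what was dreamt.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)
```

The appeal, and the pain, of this rule is that it is local: each weight update needs only the statistics of the two units it connects, in the wake phase and in the sleep phase, with no backpropagated error signal.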