That which pervades the entire body you should know to be indestructible. No one is able to destroy that imperishable soul. – Srimad Bhagavad Gita 2.17

Wow, it has been a long time, roughly two years since I started my PhD and stopped writing this blog. For much of that time I was intellectually overwhelmed, and things finally seem to have fallen into place. A more recent update is that Dr. Geoff Hinton has retired and left Google. Gil Strang also seems to have given his last lecture. Patrick Winston and Marvin Minsky of MIT are no longer in this world. In such temporary times, who will carry the work of such great people forward? I just hope that someone worthy is working in some corner of this world.

Bhagavad Gita 2.17 says that the soul is imperishable, and that the entire knowledge of the being is compressed in the soul alone. The outer body, which is purely material in nature, is a temporary phenomenon that changes as the soul transmigrates across births. From the perspective of a machine, this is equivalent to a mechanical body driven by a single brain.

Assuming that all the intelligence is packed into this brain, I now turn to GLOM, the idea Hinton proposed in 2021. A cell contains a single set of genetic material that gets copied across the entire body and gives rise to the organs. The entire knowledge of the being could therefore be said to be compressed into this single cell (which, in this view, is governed by the supersoul). Mechanically, this corresponds to a single embedding copied across several positional fields, much like the learned queries in the decoders of standard transformer networks (e.g. in DETR-based architectures). This is a realization I have recently been lucky to arrive at, and one that will hopefully become clear to others later.
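To make the analogy concrete, here is a minimal PyTorch sketch (my own illustration, not the actual GLOM or DETR implementation; names such as SharedQueryDecoder and the shapes are assumptions): a single learned embedding is copied across a set of positional fields, each copy receives its own positional encoding, and a standard transformer decoder lets the copies attend to image features.

```python
import torch
import torch.nn as nn

class SharedQueryDecoder(nn.Module):
    def __init__(self, d_model=256, num_fields=100, num_layers=6):
        super().__init__()
        # A single learned embedding plays the role of the "genetic code".
        self.shared_embedding = nn.Parameter(torch.randn(d_model))
        # Each positional field gets its own positional encoding.
        self.field_pos = nn.Parameter(torch.randn(num_fields, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, image_features):
        # image_features: (batch, num_pixels, d_model) from some backbone.
        b = image_features.shape[0]
        # Copy the single embedding across every positional field ...
        queries = self.shared_embedding.expand(b, self.field_pos.shape[0], -1)
        # ... so that position is the only thing distinguishing the copies.
        queries = queries + self.field_pos
        return self.decoder(queries, image_features)

feats = torch.randn(2, 49, 256)        # e.g. a 7x7 CNN feature map, flattened
out = SharedQueryDecoder()(feats)      # -> (2, 100, 256)
```

The design choice worth noting is that, unlike standard DETR queries (which are distinct learned vectors), every query here shares one embedding, and only the positional encoding differentiates the copies.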

However, this is not how standard networks work these days. Our work assumes that additional knowledge requires additional parameters, and that intelligence is directly proportional to the number of neurons a network possesses. The latest ChatGPT-scale models are estimated to have on the order of a trillion parameters, whereas the human brain is said to possess roughly 86 billion neurons (and on the order of a hundred trillion synapses, which is the fairer counterpart to parameters). If achieving general intelligence were simply a matter of neuron count, we would already have solved the holy grail of AGI, but we seem to be far from that.

Now, consider the origin of this intelligence. It all stems from a single cell, i.e. the DNA of the organism. Once that DNA is copied, the organism forms, and intelligent behaviour seems to emerge. On reproduction, the same behaviour is transferred through the genes and inherited by the offspring. Computationally, this resembles three parts: 1) a single embedding encodes all the knowledge of a neural network; 2) the same embedding gets copied across the image and becomes intelligent through some mechanism (which, right now, appears to be plain backprop); 3) there is a way to compress all this knowledge back into the original cell/genetic code of step one, which is transmitted as a gene to the offspring. By the laws of nature, it seems that a machine cannot really become intelligent unless it creates multiple copies of these parameters. But such mortal machines should all start their existence with a single set of weights, the equivalent of a child's DNA (a single global representation along with several levels of variation that give the machine its personal traits and knowledge). A rough sketch of this three-part cycle is given below.
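As a toy illustration of these three parts (again my own hedged sketch with made-up names such as GeneticEmbeddingNet and inherit, not the machine described in the 2023 update below), the following PyTorch snippet expands a single "gene" vector across image positions, trains everything with plain backprop through a classification head, and offers a crude step that compresses the positional copies back into a single vector.

```python
import torch
import torch.nn as nn

class GeneticEmbeddingNet(nn.Module):
    def __init__(self, d=128, num_positions=196, num_classes=10):
        super().__init__()
        self.gene = nn.Parameter(torch.randn(d))                 # (1) the "DNA"
        self.pos = nn.Parameter(torch.randn(num_positions, d))   # per-position variation
        enc_layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.mix = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d, num_classes)

    def forward(self, patch_features):
        # patch_features: (batch, num_positions, d), e.g. embedded image patches.
        b = patch_features.shape[0]
        # (2) copy the gene across all positions and let it interact with the input;
        # ordinary backprop on the classification loss trains gene, pos, and mix.
        copies = self.gene.expand(b, self.pos.shape[0], -1) + self.pos
        tokens = self.mix(copies + patch_features)
        return self.head(tokens.mean(dim=1))

    def inherit(self):
        # (3) a crude compression of the learned copies back into a single vector,
        # which could seed the "gene" of the next generation of the machine.
        return (self.gene + self.pos.mean(dim=0)).detach()

model = GeneticEmbeddingNet()
logits = model(torch.randn(4, 196, 128))   # -> (4, 10)
child_gene = model.inherit()               # -> (128,)
```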

— Update as of 2023: I have finally managed to build such a machine. Once the manuscript is accepted, I shall reveal the details. All I can say for now is that Hinton, Turing, and von Neumann were correct all along: self-reproducing automata can indeed be made to reproduce automata more complex than themselves.