Talks
I am currently on the job market and actively searching for new opportunities! If my research can contribute in any way, please reach out at rajatmodi62@gmail.com.
I have given several talks over the years, which are linked below.
Paper Talks
[1] Making GLOM Work:
GLOM was an idea proposed by Geoffrey Hinton in 2021. Here, we discuss some of the key ideas that went into getting it to work.
[2] Putting an End to End-to-End Gradient Descent:
A learning algorithm that trains each layer of a neural net greedily, in ‘isolation’. Optimization is based on a local constraint, and gradients are ‘not propagated’ between layers. Each layer predicts the activations of the given input sequence ‘k’ steps into the future, using InfoMax ideas from van den Oord et al. The core idea is that, for the representation at the current time step t, the representation at time t+k is treated as the positive sample, whereas those at t+k+1, …, t+k+N are treated as negatives. A small code sketch of this local contrastive objective follows below.
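To make the local, gradient-isolated contrastive objective concrete, here is a minimal PyTorch sketch of one layer trained with an InfoNCE-style loss. This is not the paper's code: the names (LocalLayer, predictor, k) are illustrative assumptions, the encoder is a stand-in linear block, and negatives are drawn simply from other time steps of the same sequence rather than the exact t+k+1, …, t+k+N scheme.

```python
# Illustrative sketch of a gradient-isolated layer with a CPC/Greedy-InfoMax style loss.
# All class and variable names are assumptions for exposition, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLayer(nn.Module):
    def __init__(self, dim, k=5):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)    # stand-in for a real conv/recurrent block
        self.predictor = nn.Linear(dim, dim)  # W_k: predicts the representation k steps ahead
        self.k = k

    def forward(self, x):
        # x: (batch, time, dim). Detach so no gradient flows back to earlier layers.
        return torch.relu(self.encoder(x.detach()))

    def local_loss(self, z):
        # Positive pair: (z_t, z_{t+k}); negatives: representations at other time steps.
        b, t, d = z.shape
        k = self.k
        context, targets = z[:, :-k], z[:, k:]           # (b, t-k, d) each
        pred = self.predictor(context)                   # predicted future representations
        # Score every prediction against every candidate target (dot-product similarity).
        logits = torch.einsum('bpd,bqd->bpq', pred, targets)   # (b, t-k, t-k)
        labels = torch.arange(t - k, device=z.device).expand(b, -1)
        # InfoNCE: the matching future step is the correct "class".
        return F.cross_entropy(logits.reshape(-1, t - k), labels.reshape(-1))
```

Each layer would be optimized only on its own local_loss, so no gradients cross layer boundaries.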
[3] Forward Forward Algorithm: Part 1/2:
An algorithm by Geoff Hinton. The neural net is optimized by piping input data through two forward passes: 1) the first pass uses positive data, 2) the second pass uses negative data. Each layer learns a separate classification problem: produce high responses for positive data and low responses for negative data. A small sketch of one such layer follows below.
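Below is a minimal PyTorch sketch of one Forward-Forward layer. It follows the spirit of the paper (goodness as the sum of squared activations, pushed above a threshold for positive data and below it for negative data), but the class name, threshold value, and optimizer settings are illustrative assumptions, not Hinton's reference implementation.

```python
# Illustrative sketch of a single Forward-Forward layer; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only its direction (not the previous layer's
        # goodness) is passed on to this layer.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Two forward passes: one on positive data, one on negative data.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)   # goodness for positives
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)   # goodness for negatives
        # Push positive goodness above the threshold and negative goodness below it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so the next layer trains in isolation.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Layers are stacked by feeding the detached positive and negative outputs of one layer into the next, so each layer solves its own local classification problem.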
[4] Forward Forward Algorithm: Part 2/2:
A discussion of the philosophical ideas in the Forward Forward paper, e.g., getting rid of backpropagation, optimizing neural nets in a single step with extreme weight updates, and mortal computation.
[5] ComputerVision Talks: Asynchronous Perception Machine:
Explaining how APMs were a step towards getting Geoff Hinton’s GLOM to work well.
[6] MLCollective: The problems with scaling up:
A discussion of how scaling up poses a serious challenge for existing neural nets.
System Design
The talks below are not of high quality; they are kept merely for archival purposes. Better talks on these topics exist.
[1] YouTube Recommendation System
[2] Collaborative Filtering and Federated Averaging
[3] Tweet Generation System