
Xiang Cheng: Transformers learn in-context by (functional) gradient descent

Motivated by the in-context learning phenomenon, we investigate how the Transformer neural network can implement learning algorithms in its forward pass. We show that a linear Transformer naturally learns to implement gradient descent, which enables it to learn linear functions in-context. More generally, we show that a non-linear Transformer can implement functional gradient descent with respect to some RKHS metric, which allows it to learn a broad class of functions in-context. Additionally, we show that the RKHS metric is determined by the choice of attention activation, and that the optimal choice of attention activation depends in a natural way on the class of functions that need to be learned. I will end by discussing some implications of our results for the choice and design of Transformer architectures.
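
To make the correspondence concrete, here is a minimal numerical sketch (not the exact construction from the talk) of the two claims: a linear attention layer, using the labels y_i as values and the inputs x_i as keys and queries, reproduces the prediction of one gradient-descent step on the in-context least-squares loss, and an unnormalized kernel-attention layer reproduces one step of functional gradient descent in the RKHS of that kernel. The step size `eta` and the `rbf_kernel` choice are illustrative assumptions, not quantities from the talk.

```python
# Minimal sketch (NumPy): one layer of (kernel) attention reproduces one step of
# (functional) gradient descent on the in-context least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
n, d, eta = 32, 5, 0.1                      # context size, input dim, step size (illustrative)

# In-context examples (x_i, y_i) and a query point x_q.
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true
x_q = rng.normal(size=d)

# --- Linear attention vs. one GD step on L(w) = (1/2n) * sum_i (w.x_i - y_i)^2 ---
# Starting from w_0 = 0, one gradient step gives w_1 = (eta/n) * sum_i y_i x_i.
w_1 = (eta / n) * X.T @ y
pred_gd = w_1 @ x_q

# Linear attention with values y_i and keys/queries x_i computes the same prediction:
# pred = (eta/n) * sum_i y_i <x_i, x_q>.
pred_linear_attn = (eta / n) * y @ (X @ x_q)
assert np.allclose(pred_gd, pred_linear_attn)

# --- Kernel attention vs. one functional-gradient step in the RKHS of kernel k ---
# For f_0 = 0, one functional GD step on (1/2n) * sum_i (f(x_i) - y_i)^2 gives
# f_1(.) = (eta/n) * sum_i y_i k(x_i, .), i.e. unnormalized kernel attention.
def rbf_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

k_q = rbf_kernel(X, x_q)                    # k(x_i, x_q) for each context point
pred_kernel_attn = (eta / n) * y @ k_q      # = f_1(x_q)
print(pred_gd, pred_kernel_attn)
```

Swapping `rbf_kernel` for a different attention activation changes the RKHS in which the functional gradient step is taken, which is the sense in which the choice of activation determines the metric.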
