The structure module is stated to use a "3-d equivariant transformer architecture" (John Jumper et al. (1 December 2020), AlphaFold2 presentation, slide 12).
One design for a transformer network with SE(3)-equivariance was proposed in Fabian Fuchs et al., "SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks", NeurIPS 2020; see also the accompanying website. It is not known how similar this design may be to the one used in AlphaFold.
See also the blog post by AlQuraishi on this.
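To make the cited property concrete, here is a minimal numerical sketch of what SE(3)-equivariance means for a layer operating on 3-D coordinates. This illustrates the mathematical property only; it is not the SE(3)-Transformer or AlphaFold code, and the toy layer `f` is an invented example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))            # 5 points in 3-D (e.g. residue positions)

# A random rigid motion g in SE(3): a proper rotation R plus a translation t.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.sign(np.linalg.det(q))      # flip sign if needed so det(R) = +1
t = rng.normal(size=3)

def f(points):
    """A toy 'layer' that nudges each point toward the centroid."""
    return points + 0.1 * (points.mean(axis=0) - points)

# Equivariance: f(g . x) == g . f(x) -- rotating/translating the input
# rotates/translates the output identically, so no frame is preferred.
lhs = f(x @ R.T + t)
rhs = f(x) @ R.T + t
assert np.allclose(lhs, rhs)

# By contrast, pairwise distances are SE(3)-invariant features.
d   = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
x_g = x @ R.T + t
d_g = np.linalg.norm(x_g[:, None] - x_g[None, :], axis=-1)
assert np.allclose(d, d_g)
```

An equivariant attention network builds all of its layers so that this property holds end-to-end, which is why its predicted coordinates do not depend on the arbitrary orientation of the input frame.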
See block diagram. Also John Jumper et al. (1 December 2020), AlphaFold2 presentation, slide 10.