References
S. Greydanus, M. Dzamba and J. Yosinski. Hamiltonian neural networks. Advances in neural information processing systems 32 (2019).
N. Gaby, F. Zhang and X. Ye. Lyapunov-Net: A deep neural network architecture for Lyapunov function approximation, arXiv preprint arXiv:2109.13359 (2022).
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly and others. An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929 (2020).
A. Krizhevsky, I. Sutskever and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012).
M. Tan and Q. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In: International conference on machine learning (PMLR, 2019); pp. 6105–6114.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014).
A. Trockman and J. Z. Kolter. Patches are all you need? arXiv preprint arXiv:2201.09792 (2022).
G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger. Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2017); pp. 4700–4708.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2015); pp. 1–9.
A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications, arXiv preprint arXiv:1704.04861 (2017).
M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L.-C. Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2018); pp. 4510–4520.
A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan and others. Searching for MobileNetV3. In: Proceedings of the IEEE/CVF international conference on computer vision (2019); pp. 1314–1324.
K. He, X. Zhang, S. Ren and J. Sun. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2016); pp. 770–778.
S. Xie, R. Girshick, P. Dollár, Z. Tu and K. He. Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2017); pp. 1492–1500.
F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, arXiv preprint arXiv:1602.07360 (2016).
S. Zagoruyko and N. Komodakis. Wide residual networks, arXiv preprint arXiv:1605.07146 (2016).