{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T08:54:49Z","timestamp":1775638489615,"version":"3.50.1"},"reference-count":198,"publisher":"Association for Computing Machinery (ACM)","issue":"13s","license":[{"start":{"date-parts":[[2023,7,13]],"date-time":"2023-07-13T00:00:00Z","timestamp":1689206400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"crossref","award":["EP\/T022345\/1 (DiPET) and EP\/V02860X\/1 (RAPID)"],"award-info":[{"award-number":["EP\/T022345\/1 (DiPET) and EP\/V02860X\/1 (RAPID)"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001942","name":"CHIST-ERA","doi-asserted-by":"crossref","award":["CHIST-ERA-18-SDCDN-002 (DiPET)"],"award-info":[{"award-number":["CHIST-ERA-18-SDCDN-002 (DiPET)"]}],"id":[{"id":"10.13039\/501100001942","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Commission Horizon 2020","award":["101031148 (SoftNum)"],"award-info":[{"award-number":["101031148 (SoftNum)"]}]},{"DOI":"10.13039\/501100003661","name":"Korea Institute for Advancement of Technology","doi-asserted-by":"crossref","award":["P0017011"],"award-info":[{"award-number":["P0017011"]}],"id":[{"id":"10.13039\/501100003661","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2023,12,31]]},"abstract":"<jats:p>Convolutional neural networks (CNNs) are used in our daily life, including self-driving cars, virtual assistants, social network services, healthcare services, and face recognition, among others. However, deep CNNs demand substantial compute resources during training and inference. 
The machine learning community has mainly focused on model-level optimizations such as architectural compression of CNNs, whereas the system community has focused on implementation-level optimization. In between, various arithmetic-level optimization techniques have been proposed in the arithmetic community. This article provides a survey on resource-efficient CNN techniques in terms of model-, arithmetic-, and implementation-level techniques, and identifies the research gaps for resource-efficient CNN techniques across the three different level techniques. Our survey clarifies the influence from higher- to lower-level techniques based on our resource efficiency metric definition and discusses the future trend for resource-efficient CNN research.<\/jats:p>","DOI":"10.1145\/3587095","type":"journal-article","created":{"date-parts":[[2023,3,14]],"date-time":"2023-03-14T12:11:48Z","timestamp":1678795908000},"page":"1-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":28,"title":["Resource-Efficient Convolutional Networks: A Survey on Model-, Arithmetic-, and Implementation-Level Techniques"],"prefix":"10.1145","volume":"55","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1985-5116","authenticated-orcid":false,"given":"JunKyu","family":"Lee","sequence":"first","affiliation":[{"name":"Queen\u2019s University Belfast"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0119-4359","authenticated-orcid":false,"given":"Lev","family":"Mukhanov","sequence":"additional","affiliation":[{"name":"Queen\u2019s University Belfast and Queen Mary University of London"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3603-9401","authenticated-orcid":false,"given":"Amir Sabbagh","family":"Molahosseini","sequence":"additional","affiliation":[{"name":"Queen\u2019s University 
Belfast"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9702-3070","authenticated-orcid":false,"given":"Umar","family":"Minhas","sequence":"additional","affiliation":[{"name":"Queen\u2019s University Belfast"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5536-503X","authenticated-orcid":false,"given":"Yang","family":"Hua","sequence":"additional","affiliation":[{"name":"Queen\u2019s University Belfast"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9574-4138","authenticated-orcid":false,"given":"Jesus","family":"Martinez del Rincon","sequence":"additional","affiliation":[{"name":"Queen\u2019s University Belfast"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7817-5095","authenticated-orcid":false,"given":"Kiril","family":"Dichev","sequence":"additional","affiliation":[{"name":"University of Cambridge"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4730-950X","authenticated-orcid":false,"given":"Cheol-Ho","family":"Hong","sequence":"additional","affiliation":[{"name":"Chung-Ang University"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5868-9259","authenticated-orcid":false,"given":"Hans","family":"Vandierendonck","sequence":"additional","affiliation":[{"name":"Queen\u2019s University Belfast"}]}],"member":"320","published-online":{"date-parts":[[2023,7,13]]},"reference":[{"key":"e_1_3_1_2_2","unstructured":"Papers With Code. (n.d.) ImageNet Benchmark (Image Classification on ImageNet). Retrieved March 15 2023 from https:\/\/paperswithcode.com\/sota\/image-classification-on-imagenet."},{"key":"e_1_3_1_3_2","unstructured":"NVIDIA. (n.d.) NVIDIA Ampere Architecture White Paper. Retrieved March 15 2023 from https:\/\/images.nvidia.com\/aem-dam\/en-zz\/Solutions\/data-center\/nvidia-ampere-architecture-whitepaper.pdf."},{"key":"e_1_3_1_4_2","unstructured":"Google Cloud. (n.d.) Quantifying the Performance of the TPU Our First Machine Learning Chip. 
Retrieved March 15 2023 from https:\/\/cloud.google.com\/blog\/products\/gcp\/quantifying-the-performance-of-the-tpu-our-first-machine-learning-chip."},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/IEEESTD.2019.8766229"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.5555\/2207825"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/ARITH.2019.00023"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2852335"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2018.00061"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2015.2474396"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2016.11"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSII.2019.2915822"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSI.2019.2945617"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCS48598.2019.9188213"},{"key":"e_1_3_1_15_2","volume-title":"Advances in Neural Information Processing Systems (NeurIPS\u201914)","author":"Ba Jimmy","year":"2014","unstructured":"Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems (NeurIPS\u201914)."},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBCAS.2011.2158314"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCAS.2008.4541446"},{"key":"e_1_3_1_18_2","first-page":"17","volume-title":"Proceedings of ICML Workshop on Unsupervised and Transfer Learning,","volume":"27","author":"Bengio Yoshua","year":"2012","unstructured":"Yoshua Bengio. 2012. Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning,Isabelle Guyon, Gideon Dror, Vincent Lemaire, Graham Taylor, and Daniel Silver (Eds.), Proceedings of Machine Learning Research, Vol. 27. 
PMLR, Bellevue, WA, 17\u201336. http:\/\/proceedings.mlr.press\/v27\/bengio12a.html."},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2014.2313565"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1137\/19M1289546"},{"key":"e_1_3_1_21_2","unstructured":"Alexey Bochkovskiy Chien-Yao Wang and Hong-Yuan Mark Liao. 2020. YOLOv4: Optimal speed and accuracy of object detection. arxiv:cs.CV\/2004.10934 (2020)."},{"key":"e_1_3_1_22_2","article-title":"EFloat: Entropy-coded floating point format for deep learning","author":"Bordawekar R.","year":"2021","unstructured":"R. Bordawekar, B. Abali, and M. H. Chen. 2021. EFloat: Entropy-coded floating point format for deep learning. arXiv:2102.02705 (2021).","journal-title":"arXiv:2102.02705"},{"key":"e_1_3_1_23_2","volume-title":"Advances in Neural Information Processing Systems 16","author":"Bottou L\u00e9on","year":"2004","unstructured":"L\u00e9on Bottou and Yann Le Cun. 2004. Large scale online learning. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA."},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/1150402.1150464"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/ARITH.2019.00022"},{"key":"e_1_3_1_26_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Cai Han","year":"2020","unstructured":"Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. 2020. Once-for-all: Train one network and specialize it for efficient deployment. 
In Proceedings of the International Conference on Learning Representations(ICLR\u201920)."},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.23919\/DATE.2019.8715262"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3316279.3316282"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/MCAS.2015.2484118"},{"key":"e_1_3_1_30_2","first-page":"742","volume-title":"Advances in Neural Information Processing Systems 30","author":"Chen Guobin","year":"2017","unstructured":"Guobin Chen, Wongun Choi, Xiang Yu, Tony Han, and Manmohan Chandraker. 2017. Learning efficient object detection models with knowledge distillation. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Red Hook, NY, 742\u2013751. http:\/\/papers.nips.cc\/paper\/6676-learning-efficient-object-detection-models-with-knowledge-distillation.pdf."},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/2644865.2541967"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001177"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00353"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2014.58"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSSC.2016.2616357"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1710.09282"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2017.2765695"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2016.13"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.195"},{"key":"e_1_3_1_40_2","doi-asserted-by":"crossref","first-page":"281","DOI":"10.1007\/978-3-319-11179-7_36","volume-title":"Artificial Neural Networks and Machine Learning\u2014ICANN 2014","author":"Cong 
Jason","year":"2014","unstructured":"Jason Cong and Bingjun Xiao. 2014. Minimizing computation in convolutional neural networks. In Artificial Neural Networks and Machine Learning\u2014ICANN 2014, Stefan Wermter, Cornelius Weber, W\u0142odzis\u0142aw Duch, Timo Honkela, Petia Koprinkova-Hristova, Sven Magg, G\u00fcnther Palm, and Alessandro E. P. Villa (Eds.). Springer International Publishing, Cham, Switzerland, 281\u2013290."},{"key":"e_1_3_1_41_2","volume-title":"NVIDIA Tesla V100 GPU Architecture","author":"Corporation NVIDIA","year":"2017","unstructured":"NVIDIA Corporation. 2017. NVIDIA Tesla V100 GPU Architecture. WP-08608-001v1.1. NVIDIA."},{"key":"e_1_3_1_42_2","first-page":"3123","volume-title":"Advances in Neural Information Processing Systems 28","author":"Courbariaux Matthieu","year":"2015","unstructured":"Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.). Curran Associates, Red Hook, NY, 3123\u20133131. http:\/\/papers.nips.cc\/paper\/5647-binaryconnect-training-deep-neural-networks-with-binary-weights-during-propagations.pdf."},{"key":"e_1_3_1_43_2","unstructured":"Matthieu Courbariaux Itay Hubara Daniel Soudry Ran El-Yaniv and Yoshua Bengio. 2016. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. 
arxiv:cs.LG\/1602.02830 (2016)."},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/MM.2018.112130359"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA52012.2021.00090"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2020.2976475"},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/DAC.2018.8465866"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.5555\/2968826.2968968"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.5555\/3326943.3326985"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/MNNFS.1996.493808"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/3241539.3241559"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.7717\/peerj-cs.330"},{"key":"e_1_3_1_53_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Fox Sean","year":"2021","unstructured":"Sean Fox, Seyedramin Rasoulinezhad, Julian Faraone, David Boland, and Philip Leong. 2021. A block minifloat representation for training deep neural networks. In Proceedings of the International Conference on Learning Representations(ICLR\u201921)."},{"key":"e_1_3_1_54_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201919)","author":"Frankle Jonathan","year":"2019","unstructured":"Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. 
In Proceedings of the International Conference on Learning Representations (ICLR\u201919)."},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2019.00023"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/3093337.3037702"},{"key":"e_1_3_1_57_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201919)","author":"Gao Xitong","year":"2019","unstructured":"Xitong Gao, Yiren Zhao, \u0141ukasz Dudziak, Robert Mullins, and Cheng Zhong Xu. 2019. Dynamic channel pruning: Feature boosting and suppression. In Proceedings of the International Conference on Learning Representations (ICLR\u201919)."},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1992.4.1.1"},{"key":"e_1_3_1_59_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics11060945"},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2018.00215"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/IranianCEE.2015.7146404"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2017.00538"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2016.00333"},{"key":"e_1_3_1_64_2","doi-asserted-by":"publisher","DOI":"10.5555\/3157096.3157251"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-15-8377-3_11"},{"key":"e_1_3_1_66_2","doi-asserted-by":"publisher","DOI":"10.5555\/3045118.3045303"},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.14529\/jsfi170206"},{"key":"e_1_3_1_68_2","doi-asserted-by":"publisher","DOI":"10.1145\/3445814.3446749"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2016.30"},{"key":"e_1_3_1_70_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201916)","author":"Han Song","year":"2016","unstructured":"Song Han, Huizi Mao, and William J. Dally. 2016. 
Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proceedings of the International Conference on Learning Representations (ICLR\u201916)."},{"key":"e_1_3_1_71_2","first-page":"1135","volume-title":"Advances in Neural Information Processing Systems 28","author":"Han Song","year":"2015","unstructured":"Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.). Curran Associates, Red Hook, NY, 1135\u20131143. http:\/\/papers.nips.cc\/paper\/5784-learning-both-weights-and-connections-for-efficient-neural-network.pdf."},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.23919\/DATE.2017.7927224"},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO50266.2020.00040"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01234-2_48"},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.155"},{"key":"e_1_3_1_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2018.00062"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.5244\/C.29.160"},{"key":"e_1_3_1_79_2","unstructured":"Geoffrey Hinton Oriol Vinyals and Jeff Dean. 2015. Distilling the knowledge in a neural network. 
arxiv:stat.ML\/1503.02531 (2015)."},{"key":"e_1_3_1_80_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPEC.2017.8091072"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_1_82_2","doi-asserted-by":"publisher","DOI":"10.1113\/jphysiol.1952.sp004764"},{"key":"e_1_3_1_83_2","first-page":"124","article-title":"Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks","volume":"22","author":"Hoefler Torsten","year":"2022","unstructured":"Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. 2022. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research 22, 1 (July 2022), Article 241, 124 pages.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_84_2","unstructured":"Andrew G. Howard Menglong Zhu Bo Chen Dmitry Kalenichenko Weijun Wang Tobias Weyand Marco Andreetto and Hartwig Adam. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arxiv:cs.CV\/1704.04861 (2017)."},{"key":"e_1_3_1_85_2","volume-title":"Proceedings of the NeurIPS 2021 Workshop on ImageNet: Past, Present and Future","author":"Hu Huiyi","year":"2021","unstructured":"Huiyi Hu, Ang Li, Daniele Calandriello, and Dilan Gorur. 2021. One pass ImageNet. In Proceedings of the NeurIPS 2021 Workshop on ImageNet: Past, Present and Future."},{"key":"e_1_3_1_86_2","unstructured":"Hengyuan Hu Rui Peng Yu-Wing Tai and Chi-Keung Tang. 2016. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. 
arxiv:cs.NE\/1607.03250 (2016)."},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00291"},{"key":"e_1_3_1_88_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.243"},{"key":"e_1_3_1_89_2","doi-asserted-by":"publisher","DOI":"10.5555\/3122009.3242044"},{"key":"e_1_3_1_90_2","unstructured":"Forrest N. Iandola Song Han Matthew W. Moskewicz Khalid Ashraf William J. Dally and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arxiv:cs.CV\/1602.07360 (2016)."},{"key":"e_1_3_1_91_2","first-page":"448","volume-title":"Proceedings of the 32nd International Conference on Machine Learning,","volume":"37","author":"Ioffe Sergey","year":"2015","unstructured":"Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning,Francis Bach and David Blei (Eds.). Proceedings of Machine Learning Research, Vol. 37. PMLR, Lille, France, 448\u2013456. http:\/\/proceedings.mlr.press\/v37\/ioffe15.html."},{"key":"e_1_3_1_92_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2004.832719"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1145\/2463585.2463588"},{"key":"e_1_3_1_94_2","article-title":"Dissecting the Graphcore IPU architecture via microbenchmarking","author":"Jia Zhe","year":"2019","unstructured":"Zhe Jia, Blake Tillman, Marco Maggioni, and Daniele Paolo Scarpazza. 2019. Dissecting the Graphcore IPU architecture via microbenchmarking. 
arXiv preprint arXiv:1912.03413 (2019).","journal-title":"arXiv preprint arXiv:1912.03413"},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.1021\/nl904092h"},{"key":"e_1_3_1_96_2","article-title":"A study of BFLOAT16 for deep learning training","author":"Kalamkar Dhiraj","year":"2019","unstructured":"Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, et\u00a0al. 2019. A study of BFLOAT16 for deep learning training. arXiv preprint arXiv:1905.12322 (2019).","journal-title":"arXiv preprint arXiv:1905.12322"},{"key":"e_1_3_1_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/MDAT.2017.2741463"},{"key":"e_1_3_1_98_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2016.41"},{"key":"e_1_3_1_99_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Kim Yong-Deok","year":"2016","unstructured":"Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. 2016. Compression of deep convolutional neural networks for fast and low power mobile applications. In Proceedings of the International Conference on Learning Representations(ICLR\u201916)."},{"key":"e_1_3_1_100_2","volume-title":"Advances in Neural Information Processing Systems","author":"K\u00f6ster Urs","year":"2017","unstructured":"Urs K\u00f6ster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K. Bansal, William Constable, Oguz Elibol, et\u00a0al. 2017. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In Advances in Neural Information Processing Systems. 
https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/a0160709701140704575d499c997b6ca-Paper.pdf."},{"key":"e_1_3_1_101_2","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"e_1_3_1_102_2","doi-asserted-by":"publisher","DOI":"10.1088\/0957-4484\/24\/38\/382001"},{"key":"e_1_3_1_103_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.435"},{"key":"e_1_3_1_104_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1989.1.4.541"},{"key":"e_1_3_1_105_2","volume-title":"Advances in Neural Information Processing Systems 2","author":"LeCun Yann","year":"1990","unstructured":"Yann LeCun, John Denker, and Sara Solla. 1990. Optimal brain damage. In Advances in Neural Information Processing Systems 2, D. Touretzky (Ed.), Vol. 2. Morgan-Kaufmann."},{"key":"e_1_3_1_106_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.parco.2020.102663"},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSP.2021.3086355"},{"key":"e_1_3_1_108_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICFEC51620.2021.00015"},{"key":"e_1_3_1_109_2","volume-title":"Proceedings of the Workshop on Efficient Methods for Deep Neural Networks in the 30th International Conference on Neural Information Processing Systems (NeurIPS\u201916)","author":"Li Fengfu","year":"2016","unstructured":"Fengfu Li, Bo Zhang, and Bin Liu. 2016. Ternary weight networks. In Proceedings of the Workshop on Efficient Methods for Deep Neural Networks in the 30th International Conference on Neural Information Processing Systems (NeurIPS\u201916)."},{"key":"e_1_3_1_110_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Li Hao","year":"2017","unstructured":"Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2017. Pruning filters for efficient ConvNets. 
In Proceedings of the International Conference on Learning Representations(ICLR\u201917)."},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2019.2924215"},{"key":"e_1_3_1_112_2","doi-asserted-by":"publisher","DOI":"10.1145\/3123939.3123977"},{"key":"e_1_3_1_113_2","doi-asserted-by":"publisher","DOI":"10.5555\/3294771.3294979"},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001179"},{"key":"e_1_3_1_115_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.298"},{"key":"e_1_3_1_116_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58529-7_4"},{"key":"e_1_3_1_117_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW53098.2021.00347"},{"key":"e_1_3_1_118_2","doi-asserted-by":"publisher","DOI":"10.1109\/FCCM.2017.64"},{"key":"e_1_3_1_119_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.541"},{"key":"e_1_3_1_120_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"e_1_3_1_121_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Mariet Zelda","year":"2016","unstructured":"Zelda Mariet and Suvrit Sra. 2016. Diversity networks: Neural network compression using determinantal point processes. In Proceedings of the International Conference on Learning Representations(ICLR\u201916)."},{"key":"e_1_3_1_122_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1312.5851"},{"key":"e_1_3_1_123_2","unstructured":"Yoshitomo Matsubara Marco Levorato and Francesco Restuccia. 2021. Split computing and early exiting for deep learning applications: Survey and research challenges. 
arxiv:eess.SP\/2103.04505 (2021)."},{"key":"e_1_3_1_124_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF02478259"},{"key":"e_1_3_1_125_2","doi-asserted-by":"publisher","DOI":"10.1145\/977091.977115"},{"key":"e_1_3_1_126_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Micikevicius Paulius","year":"2018","unstructured":"Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich K. Elsen, David Garcia, Boris Ginsburg, et\u00a0al. 2018. Mixed precision training. In Proceedings of the International Conference on Learning Representations(ICLR\u201918)."},{"key":"e_1_3_1_127_2","unstructured":"Michael A. Nielsen. 2018. Neural Networks and Deep Learning. Retrieved March 15 2023 from http:\/\/neuralnetworksanddeeplearning.com\/."},{"key":"e_1_3_1_128_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSSC.2013.2259038"},{"key":"e_1_3_1_129_2","doi-asserted-by":"publisher","DOI":"10.5555\/2971808.2971918"},{"key":"e_1_3_1_130_2","doi-asserted-by":"publisher","DOI":"10.1145\/3079856.3080254"},{"key":"e_1_3_1_131_2","volume-title":"Computer Arithmetic: Algorithms and Hardware Designs","author":"Parhami Behrooz","year":"2010","unstructured":"Behrooz Parhami. 2010. Computer Arithmetic: Algorithms and Hardware Designs. Oxford University Press, New York, NY."},{"key":"e_1_3_1_132_2","doi-asserted-by":"publisher","DOI":"10.1109\/BioCAS.2014.6981816"},{"key":"e_1_3_1_133_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2018.8451355"},{"key":"e_1_3_1_134_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2019.00753"},{"key":"e_1_3_1_135_2","doi-asserted-by":"crossref","first-page":"525","DOI":"10.1007\/978-3-319-46493-0_32","volume-title":"Computer Vision\u2014ECCV 2016","author":"Rastegari Mohammad","year":"2016","unstructured":"Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. 2016. XNOR-Net: ImageNet classification using binary convolutional neural networks. 
In Computer Vision\u2014ECCV 2016, Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (Eds.). Springer International Publishing, Cham, Switzerland, 525\u2013542."},{"key":"e_1_3_1_136_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW53098.2021.00518"},{"key":"e_1_3_1_137_2","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2016.7783721"},{"key":"e_1_3_1_138_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2018.00017"},{"key":"e_1_3_1_139_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Romero Adriana","year":"2015","unstructured":"Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. FitNets: Hints for thin deep nets. In Proceedings of the International Conference on Learning Representations(ICLR\u201915)."},{"key":"e_1_3_1_140_2","doi-asserted-by":"publisher","DOI":"10.1037\/h0042519"},{"key":"e_1_3_1_141_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICRC.2018.8638592"},{"key":"e_1_3_1_142_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSI.2019.2951083"},{"key":"e_1_3_1_143_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00474"},{"key":"e_1_3_1_144_2","first-page":"6056","volume-title":"Advances in Neural Information Processing Systems 32","author":"Scheidegger Florian","year":"2019","unstructured":"Florian Scheidegger, Luca Benini, Costas Bekas, and A. Cristiano I. Malossi. 2019. Constrained deep neural network architecture search for IoT devices accounting for hardware calibration. In Advances in Neural Information Processing Systems 32. 6056\u20136066."},{"key":"e_1_3_1_145_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCAS.2012.6272131"},{"key":"e_1_3_1_146_2","article-title":"A survey of neuromorphic computing and neural networks in hardware","author":"Schuman Catherine D.","year":"2017","unstructured":"Catherine D. Schuman, Thomas E. Potok, Robert M. Patton, J. Douglas Birdwell, Mark E. Dean, Garrett S. 
Rose, and James S. Plank. 2017. A survey of neuromorphic computing and neural networks in hardware. arXiv preprint arXiv:1705.06963 (2017).","journal-title":"arXiv preprint arXiv:1705.06963"},{"key":"e_1_3_1_147_2","doi-asserted-by":"publisher","DOI":"10.1145\/3007787.3001139"},{"key":"e_1_3_1_148_2","doi-asserted-by":"publisher","DOI":"10.1145\/503048.503072"},{"key":"e_1_3_1_149_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2018.00069"},{"key":"e_1_3_1_150_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBME.2003.820390"},{"key":"e_1_3_1_151_2","doi-asserted-by":"publisher","DOI":"10.1109\/82.775396"},{"key":"e_1_3_1_152_2","first-page":"173","volume-title":"Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NeurIPS) 2000, Denver, CO, USA","author":"Simoni Mario F.","year":"2000","unstructured":"Mario F. Simoni, Gennady S. Cymbalyuk, Michael Elliott Sorensen, Ronald L. Calabrese, and Stephen P. DeWeerth. 2000. Development of hybrid systems: Interfacing a silicon neuron to a leech heart interneuron. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NeurIPS) 2000, Denver, CO, USA, Todd K. Leen, Thomas G. Dietterich, and Volker Tresp (Eds.). MIT Press, Cambridge, MA, 173\u2013179."},{"key":"e_1_3_1_153_2","doi-asserted-by":"publisher","DOI":"10.5244\/C.29.31"},{"key":"e_1_3_1_154_2","first-page":"4900","volume-title":"Advances in Neural Information Processing Systems 32","author":"Sun Xiao","year":"2019","unstructured":"Xiao Sun, Jungwook Choi, Chia-Yu Chen, Naigang Wang, Swagath Venkataramani, Vijayalakshmi (Viji) Srinivasan, Xiaodong Cui, Wei Zhang, and Kailash Gopalakrishnan. 2019. Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d\u2019Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.). 
Curran Associates, Red Hook, NY, 4900\u20134909. http:\/\/papers.nips.cc\/paper\/8736-hybrid-8-bit-floating-point-hfp8-training-and-inference-for-deep-neural-networks.pdf."},{"key":"e_1_3_1_155_2","first-page":"1796","volume-title":"Advances in Neural Information Processing Systems 33","author":"Sun Xiao","year":"2020","unstructured":"Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Swagath Venkataramani, Xiaodong Cui, Kaoutar El Maghraoui, Vijayalakshmi (Viji) Srinivasan, and Kailash Gopalakrishnan. 2020. Ultra-low precision 4-bit training of deep neural networks. In Advances in Neural Information Processing Systems 33. 1796\u20131807."},{"key":"e_1_3_1_156_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2017.2761740"},{"key":"e_1_3_1_157_2","doi-asserted-by":"publisher","DOI":"10.1109\/DAC18072.2020.9218516"},{"key":"e_1_3_1_158_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00293"},{"key":"e_1_3_1_159_2","unstructured":"Mingxing Tan and Quoc Le. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of Machine Learning Research, Vol. 97, Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, Long Beach, CA, 6105\u20136114. http:\/\/proceedings.mlr.press\/v97\/tan19a.html."},{"key":"e_1_3_1_160_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"e_1_3_1_161_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR.2016.7900006"},{"key":"e_1_3_1_162_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCS.2017.226"},{"key":"e_1_3_1_163_2","doi-asserted-by":"publisher","DOI":"10.1145\/3359983"},{"key":"e_1_3_1_164_2","volume-title":"Proceedings of the Deep Learning and Unsupervised Feature Learning Workshop (NeurIPS\u201911)","author":"Vanhoucke Vincent","year":"2011","unstructured":"Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. 2011. Improving the speed of neural networks on CPUs. 
In Proceedings of the Deep Learning and Unsupervised Feature Learning Workshop (NeurIPS\u201911)."},{"key":"e_1_3_1_165_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Vasilache Nicolas","year":"2015","unstructured":"Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun. 2015. Fast convolutional nets with fbfft: A GPU performance evaluation. In Proceedings of the International Conference on Learning Representations(ICLR\u201915)."},{"key":"e_1_3_1_166_2","doi-asserted-by":"publisher","DOI":"10.1145\/3240765.3240803"},{"key":"e_1_3_1_167_2","doi-asserted-by":"publisher","DOI":"10.1145\/3309551"},{"key":"e_1_3_1_168_2","first-page":"7675","volume-title":"Advances in Neural Information Processing Systems 31","author":"Wang Naigang","year":"2018","unstructured":"Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, and Kailash Gopalakrishnan. 2018. Training deep neural networks with 8-bit floating point numbers. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). Curran Associates, Red Hook, NY, 7675\u20137684. 
http:\/\/papers.nips.cc\/paper\/7994-training-deep-neural-networks-with-8-bit-floating-point-numbers.pdf."},{"key":"e_1_3_1_169_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.3007749"},{"key":"e_1_3_1_170_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.58337"},{"key":"e_1_3_1_171_2","doi-asserted-by":"publisher","DOI":"10.5555\/59657"},{"key":"e_1_3_1_172_2","doi-asserted-by":"publisher","DOI":"10.1145\/1498765.1498785"},{"key":"e_1_3_1_173_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0006-3495(72)86068-5"},{"key":"e_1_3_1_174_2","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611970364"},{"key":"e_1_3_1_175_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01099"},{"key":"e_1_3_1_176_2","unstructured":"Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. 2020. Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv:cs.LG\/2004.09602 (2020)."},{"key":"e_1_3_1_177_2","article-title":"Training and inference with integers in deep neural networks","author":"Wu Shuang","year":"2018","unstructured":"Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. 2018. Training and inference with integers in deep neural networks. 
arXiv preprint arXiv:1802.04680 (2018).","journal-title":"arXiv preprint arXiv:1802.04680"},{"key":"e_1_3_1_178_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.634"},{"key":"e_1_3_1_179_2","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA47549.2020.00033"},{"key":"e_1_3_1_180_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00748"},{"key":"e_1_3_1_181_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.643"},{"key":"e_1_3_1_182_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01249-6_18"},{"key":"e_1_3_1_183_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.12.027"},{"key":"e_1_3_1_184_2","volume-title":"Proceedings of the 37th International Conference on Machine Learning (ICML\u201920)","author":"Yang Zitong","year":"2020","unstructured":"Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, and Yi Ma. 2020. Rethinking bias-variance trade-off for generalization of neural networks. In Proceedings of the 37th International Conference on Machine Learning (ICML\u201920). Article 998, 11 pages."},{"key":"e_1_3_1_185_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA.2018.00071"},{"issue":"6","key":"e_1_3_1_186_2","first-page":"1733","article-title":"XNOR-SRAM: In-memory computing SRAM macro for binary\/ternary deep neural networks","volume":"55","author":"Yin S.","year":"2020","unstructured":"S. Yin, Z. Jiang, J. Seo, and M. Seok. 2020. XNOR-SRAM: In-memory computing SRAM macro for binary\/ternary deep neural networks. IEEE Journal of Solid-State Circuits 55, 6 (2020), 1733\u20131743.","journal-title":"IEEE Journal of Solid-State Circuits"},{"key":"e_1_3_1_187_2","article-title":"Image classification at supercomputer scale","author":"Ying Chris","year":"2018","unstructured":"Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. 2018. Image classification at supercomputer scale. 
arXiv preprint arXiv:1811.06992 (2018).","journal-title":"arXiv preprint arXiv:1811.06992"},{"key":"e_1_3_1_188_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Yu Jiahui","year":"2019","unstructured":"Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. 2019. Slimmable neural networks. In Proceedings of the International Conference on Learning Representations(ICLR\u201919)."},{"key":"e_1_3_1_189_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00958"},{"key":"e_1_3_1_190_2","doi-asserted-by":"crossref","first-page":"818","DOI":"10.1007\/978-3-319-10590-1_53","volume-title":"Computer Vision\u2014ECCV 2014","author":"Zeiler Matthew D.","year":"2014","unstructured":"Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Computer Vision\u2014ECCV 2014, David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars (Eds.). Springer International Publishing, Cham, Switzerland, 818\u2013833."},{"key":"e_1_3_1_191_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2019.2904897"},{"key":"e_1_3_1_192_2","doi-asserted-by":"publisher","DOI":"10.1145\/3020078.3021727"},{"key":"e_1_3_1_193_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00240"},{"key":"e_1_3_1_194_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00716"},{"key":"e_1_3_1_195_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA52012.2021.00061"},{"key":"e_1_3_1_196_2","unstructured":"Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. 2018. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv:cs.NE\/1606.06160."},{"key":"e_1_3_1_197_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Zhu Chenzhuo","year":"2017","unstructured":"Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. 2017. Trained ternary quantization. 
In Proceedings of the International Conference on Learning Representations(ICLR\u201917)."},{"key":"e_1_3_1_198_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00204"},{"key":"e_1_3_1_199_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Zoph Barret","year":"2017","unstructured":"Barret Zoph and Quoc Le. 2017. Neural architecture search with reinforcement learning. In Proceedings of the International Conference on Learning Representations(ICLR\u201917)."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3587095","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3587095","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T18:08:01Z","timestamp":1750183681000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3587095"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,13]]},"references-count":198,"journal-issue":{"issue":"13s","published-print":{"date-parts":[[2023,12,31]]}},"alternative-id":["10.1145\/3587095"],"URL":"https:\/\/doi.org\/10.1145\/3587095","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,13]]},"assertion":[{"value":"2021-12-22","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-02-23","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-07-13","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}