Multidisciplinary design method for product quality based on ResNet50 network

  • Guodong Yi,
  • Lifang Yi,
  • Yanwu Feng

  State Key Laboratory of Fluid Power & Mechatronic Systems, Zhejiang University, Hangzhou 310027, China
Cite as
Yi G., Yi L., Feng Y. (2021). Multidisciplinary design method for product quality based on ResNet50 network. Proceedings of the 33rd European Modeling & Simulation Symposium (EMSS 2021), pp. 281-288. DOI: https://doi.org/10.46354/i3m.2021.emss.039

Abstract

The positioning accuracy of a PCB during processing depends on the quality of the MARK point images. Because the collection of MARK point images is affected by factors such as background and illumination, image classification is key to improving the accuracy of PCB positioning. In this paper, a multidisciplinary design modelling method for product quality is proposed. A classification model is built through transfer learning, based on the ResNet50 network and its pretrained weights. Experiments on a customized data set verify that the accuracy of MARK point image classification reaches 98.53%. Compared with traditional classification methods, the accuracy of this method is 20% higher, and it is better suited to small custom data sets. This provides a foundation for the subsequent classification and segmentation of MARK point images with different characteristics.
