Open Access
Int. J. Simul. Multidisci. Des. Optim.
Volume 15, 2024
Article Number 9
Number of page(s) 9
Published online 12 April 2024
  1. A. Sarkar, A. Banerjee, P.K. Singh, R. Sarkar, 3D human action recognition: through the eyes of researchers, Exp. Syst. Appl. 193, 1164–1181 (2022)
  2. Y. Liu, R. Ma, H. Li, C. Wang, Y. Tao, D. Giovanni, RGB-D human action recognition of deep feature enhancement and fusion using two-stream ConvNet, J. Sens. 202, 886–895 (2021)
  3. Z. Cui, K. Henrickson, R. Ke, Y. Wang, Traffic graph convolutional recurrent neural network: a deep learning framework for network-scale traffic learning and forecasting, IEEE Trans. Intell. Transp. Syst. 21, 4883–4894 (2020)
  4. X. Yang, Q. Zhu, P. Li, P. Chen, Q. Niu, Fine-grained predicting urban crowd flows with adaptive spatiotemporal graph convolutional network, Neurocomputing 446, 95–105 (2021)
  5. H. Zhou, D. Ren, H. Xia, M. Fan, X. Yang, H. Huang, AST-GNN: an attention-based spatiotemporal graph neural network for interaction-aware pedestrian trajectory prediction, Neurocomputing 445, 298–308 (2021)
  6. X. Zhang, G. Chen, An automatic insect recognition algorithm in complex background based on convolution neural network, Traitement du Signal 37, 793–798 (2020)
  7. Y. Yang, A vehicle recognition algorithm based on deep convolution neural network, Traitement du Signal 37, 647–653 (2020)
  8. X. Song, S. Gao, C. Chen, S. Wang, A novel face recognition algorithm for imbalanced small samples, Traitement du Signal 37, 425–432 (2020)
  9. A. Gharahdaghi, F. Razzazi, A. Amini, A non-linear mapping representing human action recognition under missing modality problem in video data, Measurement 186, 1101–1109 (2021)
  10. B. Sun, D. Kong, S. Wang, L. Wang, B. Yin, Joint transferable dictionary learning and view adaptation for multi-view human action recognition, ACM Trans. Knowl. Discov. Data 15, 32–56 (2021)
  11. W. Chen, L. Liu, G. Lin, Y. Chen, J. Wang, Class structure-aware adversarial loss for cross-domain human action recognition, IET Image Process. 15, 3425–3432 (2021)
  12. L. Liu, L. Yang, W. Chen, X. Gao, Dual view 3D human pose estimation without camera parameters for action recognition, IET Image Process. 15, 3433–3440 (2021)
  13. Y. Li, X. Xu, J. Xu, E. Du, Bilayer model for cross-view human action recognition based on transfer learning, J. Electr. Imag. 28, 1–14 (2019)
  14. Z. Tu, H. Li, D. Zhang, J. Dauwel, B. Li, J. Yuan, Action-stage emphasized spatiotemporal VLAD for video action recognition, IEEE Trans. Image Process. 28, 2799–2812 (2019)
  15. W. Xu, M. Wu, J. Zhu, M. Zhao, Multi-scale skeleton adaptive weighted GCN for skeleton-based human action recognition in IoT, Appl. Soft Comput. 104, 1568–1579 (2021)
  16. H.B. Naeem, F. Murtaza, M.H. Yousaf, S. Velastin, T-VLAD: temporal vector of locally aggregated descriptor for multiview human action recognition, Pattern Recogn. Lett. 148, 22–28 (2021)
  17. W. Peng, J. Shi, T. Varanka, G. Zhao, Rethinking the ST-GCNs for 3D skeleton-based human action recognition, Neurocomputing 454, 45–53 (2021)
  18. F. Li, A. Zhu, Z. Liu, Y. Huo, Y. Xu, G. Hua, Pyramidal graph convolutional network for skeleton-based human action recognition, IEEE Sens. J. 21, 16183–16191 (2021)
  19. X. Ji, Q. Zhao, J. Cheng, C. Ma, Exploiting spatiotemporal representation for 3D human action recognition from depth map sequences, Knowl. Based Syst. 227, 1057–1069 (2021)
  20. Y. Lei, Research on micro video character perception and recognition based on target detection technology, J. Comput. Cogn. Eng. 1, 83–87 (2022)
