1. McCulloch W. S., Pitts W. A logical calculus of the ideas immanent in nervous activity // Bulletin of Mathematical Biophysics. 1943. Vol. 5. Iss. 4. P. 115–133. doi: https://doi.org/10.1007/BF02478259
2. Bebis G., Georgiopoulos M. Feed-forward neural networks // IEEE Potentials. 1994. Vol. 13. Iss. 4. P. 27–31. doi: https://doi.org/10.1109/45.329294
3. Sutskever I., Vinyals O., Le Q. V. Sequence to sequence learning with neural networks // Advances in Neural Information Processing Systems 27: [web] / NeurIPS Proceedings. 2014. URL: https://proceedings.neurips.cc/paper/2014/file/a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf (accessed: 08.02.2022).
4. Gers F. A., Schmidhuber J., Cummins F. Learning to forget: Continual prediction with LSTM // Neural Computation. 2000. Vol. 12. Iss. 10. P. 2451–2471. doi: https://doi.org/10.1162/089976600300015015
5. Trappenberg T. P. Machine learning with sklearn // Fundamentals of Machine Learning. Oxford: Oxford University Press, 2019. P. 38–65.
6. Gulli A., Pal S. Deep learning with Keras: Implement neural networks with Keras on Theano and TensorFlow. Birmingham: Packt Publishing, 2017. 318 p.
7. Sial A. H., Rashdi S. Y. S., Khan A. H. Comparative analysis of data visualization libraries Matplotlib and Seaborn in Python // International Journal of Advanced Trends in Computer Science and Engineering. 2021. Vol. 10. Iss. 1. P. 277–281.
8. De Boer P.-T., Kroese D. P., Mannor S., Rubinstein R. Y. A tutorial on the cross-entropy method // Annals of Operations Research. 2005. Vol. 134. Iss. 1. P. 19–67. doi: https://doi.org/10.1007/s10479-005-5724-z
9. Kingma D. P., Ba J. Adam: A method for stochastic optimization: preprint // arXiv.org: [web]. 2014. URL: https://arxiv.org/abs/1412.6980v1 (accessed: 08.02.2022).
10. Guyon I. A scaling law for the validation-set training-set size ratio // AT&T Bell Laboratories Technical Journal. 1997. Vol. 1. P. 1–11.