Implementation of 14 bits floating point numbers of calculating units for neural network hardware development / I. V. Zoev [et al.]

Set level: (RuTPU)RU\TPU\network\2008, IOP Conference Series: Materials Science and Engineering
Additional personal authors: Zoev, I. V. (Ivan Vladimirovich), specialist in informatics and computer technology, programmer at Tomsk Polytechnic University, b. 1993; Beresnev, A. P.; Mytsko, E. A. (Evgeniy Aleksandrovich), specialist in informatics and computer technology, programmer at Tomsk Polytechnic University, b. 1991; Malchukov, A. N. (Andrey Nikolaevich), specialist in informatics and computer technology, associate professor at Tomsk Polytechnic University, Candidate of Technical Sciences, b. 1982
Corporate author (secondary): National Research Tomsk Polytechnic University (TPU), Institute of Cybernetics (IC)
Language: English
Series: Information technologies in Mechanical Engineering
Subjects: electronic resource | works of TPU researchers | floating-point numbers | computing devices | hardware | neural networks | machine learning | artificial neural networks | convolutional networks

Title from title screen

[References: 10 titles]

An important aspect of modern automation is machine learning. Specifically, neural networks are used for environment analysis and decision making based on available data. This article covers the operations on floating-point numbers that are performed most frequently in artificial neural networks. The choice of a 14-bit width for floating-point numbers implemented on FPGAs is justified on the basis of the architecture of modern integrated circuits. The floating-point multiplication (multiplier) algorithm is described, and the features of the addition (adder) and subtraction (subtractor) operations are discussed. Floating-point comparison operations ('less than' and 'greater than or equal'), which are required for convolutional neural networks, are also presented. In conclusion, the resulting units are compared with the calculating units supplied by Altera.
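The article itself defines the exact 14-bit word format and the hardware implementation; as a rough software illustration only, the sketch below shows one possible way such a narrow floating-point multiplication could work, assuming a hypothetical layout of 1 sign bit, 5 exponent bits (bias 15) and 8 fraction bits with an implicit leading one, truncation instead of rounding, and no handling of zeros, subnormals, infinities or NaNs. The layout, the bias and the name fp14_mul are illustrative assumptions, not taken from the article.

#include <stdint.h>

/* Hypothetical 14-bit float layout (an assumption, not the paper's format):
 * bit 13 = sign, bits 12..8 = exponent (bias 15), bits 7..0 = fraction
 * with an implicit leading 1. Packed into the low 14 bits of a uint16_t. */
#define FP14_FRAC_BITS 8
#define FP14_BIAS      15

typedef uint16_t fp14;

/* Multiply two normalized fp14 values. Zeros, subnormals, infinities and
 * NaNs are deliberately ignored to keep the sketch short. */
fp14 fp14_mul(fp14 a, fp14 b)
{
    uint16_t sign = ((a >> 13) ^ (b >> 13)) & 1;          /* sign of product    */
    int16_t  e    = (int16_t)((a >> FP14_FRAC_BITS) & 0x1F)
                  + (int16_t)((b >> FP14_FRAC_BITS) & 0x1F)
                  - FP14_BIAS;                             /* re-biased exponent */
    uint32_t ma   = (a & 0xFF) | 0x100;                    /* restore implicit 1 */
    uint32_t mb   = (b & 0xFF) | 0x100;
    uint32_t prod = ma * mb;                               /* 18-bit significand product in [1,4) */

    if (prod & (1u << 17)) {                               /* product >= 2: renormalize */
        prod >>= 1;
        e += 1;
    }
    uint16_t frac = (prod >> 8) & 0xFF;                    /* drop implicit 1, truncate */

    if (e <= 0)  return (fp14)(sign << 13);                /* underflow: flush to zero  */
    if (e >= 31) return (fp14)((sign << 13) | 0x1F00);     /* overflow: saturate exponent */

    return (fp14)((sign << 13) | ((uint16_t)e << FP14_FRAC_BITS) | frac);
}

Truncation and flush-to-zero keep the datapath short, which is the usual motivation for narrow custom formats on FPGAs; the units described in the article may make different choices for rounding and special values.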
