000 | 04015nlm1a2200517 4500 | ||
---|---|---|---|
001 | 661768 | ||
005 | 20231030041758.0 | ||
035 | _a(RuTPU)RU\TPU\network\32640 | ||
035 | _aRU\TPU\network\29971 | ||
090 | _a661768 | ||
100 | _a20200214a2019 k y0engy50 ba | ||
101 | 0 | _aeng | |
135 | _adrcn ---uucaa | ||
181 | 0 | _ai | |
182 | 0 | _ab | |
200 | 1 | _aDevelopment of the video stream object detection algorithm (VSODA) with tracking _fA. Yu. Zarnitsyn, A. S. Volkov, A. A. Voytsekhovskiy, B. I. Pyakullya | |
203 | _aText _celectronic | ||
300 | _aTitle screen | ||
320 | _a[References: 9 titles] | ||
330 | _aObject tracking is one of the most important tasks in video analysis. Many methods have been proposed, such as TLD (Tracking, Learning, Detection), Meanshift and MIL, but they show good accuracy only in laboratory cases, not in real ones, where accuracy is defined as the numerical difference between the computed object coordinates and the real ones. One of the reasons is a lack of information about the tracked object and about environment changes. If a method has prior information about the tracked object, it is able to perform with higher accuracy. Some of the newest object tracking methods, such as GOTURN, use a trained CNN (convolutional neural network) and achieve better accuracy thanks to knowledge of how the tracked object looks in different situations, such as changes in light intensity and rotations of the tracked object. If we use only a classification algorithm (classifier), it can find an object that was in the training set with high probability, but if the object's appearance changes it will be lost once the deviation exceeds the trust limit. It is therefore important to have both prior and posterior information about the tracked object. The prior information is given by the detector (CNN) and the posterior information by the tracking algorithm (TLD). One of the biggest problems of the detector is its high computational complexity in terms of the number of operations, and one solution is to use the classifier in parallel with the tracker. In future work we are going to use different sensors, not only an RGB camera but also an RGBD camera, which may improve accuracy due to the higher amount of information. | ||
461 | _tEAI Endorsed Transactions on Energy Web | ||
463 | _tVol. 19, iss. 22 _v[e1, 5 p.] _d2019 | ||
610 | 1 | _aelectronic resource | |
610 | 1 | _aworks of TPU scientists | |
610 | 1 | _acomputer vision | |
610 | 1 | _adeep learning | |
610 | 1 | _amachine learning | |
610 | 1 | _apattern recognition | |
610 | 1 | _amobile robotics | |
610 | 1 | _aobject tracking | |
610 | 1 | _avideo analysis | |
610 | 1 | _acomputer vision | |
610 | 1 | _amachine learning | |
610 | 1 | _arecognition | |
610 | 1 | _apattern recognition | |
610 | 1 | _amobile robotics | |
610 | 1 | _atracking | |
610 | 1 | _avideo analysis | |
701 | 1 | _aZarnitsyn _bA. Yu. _cspecialist in the field of informatics and computer technology _cAssistant of the Department of Tomsk Polytechnic University _f1990- _gAleksander Yurievich _2stltpush _3(RuTPU)RU\TPU\pers\46039 | |
701 | 1 | _aVolkov _bA. S. _gArtem Sergeevich | |
701 | 1 | _aVoytsekhovskiy _bA. A. _gAleksey Alekseevich | |
701 | 1 | _aPyakullya _bB. I. _cspecialist in the field of informatics and computer technology _cdesign engineer of Tomsk Polytechnic University _f1990- _gBoris Ivanovich _2stltpush _3(RuTPU)RU\TPU\pers\34170 | |
712 | 0 | 2 | _aNational Research Tomsk Polytechnic University _bEngineering School of Information Technologies and Robotics _bDivision of Automation and Robotics _h7952 _2stltpush _3(RuTPU)RU\TPU\col\23553 |
801 | 2 | _aRU _b63413507 _c20200214 _gRCR | |
856 | 4 | _uhttp://dx.doi.org/10.4108/eai.22-1-2019.156385 | |
942 | _cCF |
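
The abstract in field 330 describes running a computationally heavy CNN detector in parallel with a lightweight tracker, so the detector's prior information periodically refreshes the tracker's posterior estimate. The sketch below is only a minimal illustration of that general scheme, not the authors' VSODA code: the `detect` stub, the `DETECT_EVERY` refresh interval, and the choice of OpenCV's MIL tracker are all assumptions made for the example.

```python
# Illustrative sketch (not the published VSODA implementation): run a CNN
# detector every DETECT_EVERY frames to supply prior information, and let a
# lightweight tracker propagate the box in between (posterior information).
import cv2

DETECT_EVERY = 10  # assumed refresh interval, in frames


def detect(frame):
    """Hypothetical stub for a CNN detector; replace with a real model.
    Should return a bounding box (x, y, w, h) of ints, or None if nothing is found."""
    return None


def make_tracker():
    # MIL is one of the trackers named in the abstract; the factory name
    # varies across OpenCV versions (cv2.TrackerMIL_create in older releases).
    return cv2.TrackerMIL.create()


cap = cv2.VideoCapture(0)  # RGB camera stream
tracker = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    if frame_idx % DETECT_EVERY == 0 or tracker is None:
        box = detect(frame)              # prior information from the detector
        if box is not None:
            tracker = make_tracker()
            tracker.init(frame, box)     # re-anchor the tracker on the detection
    else:
        ok, box = tracker.update(frame)  # posterior information from the tracker
        if not ok:
            tracker = None               # object lost; fall back to detection

    frame_idx += 1

cap.release()
```

The re-detection interval trades accuracy for compute: a smaller `DETECT_EVERY` corrects tracker drift sooner at the cost of running the expensive classifier more often, which is the complexity concern the abstract raises.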