Computer Vision in business
ANO "Digital Economy"
Machine learning - academic talks
Dubai, Almaty, Yerevan, Tbilisi, London, Singapore - Russian experience.
Hebbian learning for Convolutional Neural Networks: Overview
Continuous Deep Q-Learning in Optimal Control Problems: Normalized Advantage Functions Analysis
Yandex, IMM UB RAS
Kartashev Oleg, Severstal Digital
Metric Learning, Anomaly Detection and Synthetic Data for preventing chain conveyor outages
Mining industry cases on the application of machine learning and computer vision: a business perspective
The implementation of artificial intelligence (AI) and machine learning in the mining industry provides many economic benefits: cost reduction, greater efficiency and productivity, reduced exposure of workers to hazardous conditions, continuous production, and improved safety. However, the adoption of these technologies has faced economic, financial, technological, workforce, and social challenges. This report discusses the current status of AI and machine learning implementation in the mining industry and highlights potential areas of future application. It also presents several implementation cases and outlines some of the steps needed for the successful adoption of these technologies in this sector.
Chain conveyor monitoring is one of the technically complex CV tasks solved at Severstal. We will describe the challenges we faced, how we dealt with the lack of data, the ML pipeline we created, and how it is deployed and runs on 39 cameras across 3 factory shops.
At the session, we will share our experience of moving to different countries: the cost of living, working conditions, visas, the job market, what kind of support you can get, etc. Real case studies, first-hand.
Victor Lempitsky, Yerevan
Arkady Sandler, Spain, Israel
One of the most effective continuous deep reinforcement learning algorithms is normalized advantage functions (NAF). The main idea of NAF is to approximate the Q-function by functions that are quadratic with respect to the action variable. This makes the algorithm applicable to continuous reinforcement learning problems but, on the other hand, raises the question of which classes of problems allow this approximation. The presented paper describes one such class. We consider reinforcement learning problems obtained by the time-discretization of certain optimal control problems. Based on the idea of NAF, we present a new family of quadratic functions and prove its suitable approximation properties. Taking these properties into account, we provide several ways to improve NAF. The experimental results confirm the efficiency of our improvements.
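To make the quadratic-in-action idea concrete, here is a minimal numpy sketch (all values are illustrative, not taken from the paper): NAF writes Q(s, a) = V(s) - 1/2 (a - mu(s))^T P(s) (a - mu(s)) with P(s) positive definite, so the greedy action is simply mu(s).

```python
import numpy as np

def naf_q(a, mu, P, V):
    """Quadratic-in-action Q-function used by NAF:
    Q(s, a) = V(s) - 0.5 * (a - mu)^T P (a - mu).
    With P positive definite, argmax_a Q(s, a) = mu."""
    d = a - mu
    return V - 0.5 * d @ P @ d

# Toy state-dependent quantities (illustrative, not from the paper)
mu = np.array([0.3, -0.1])               # greedy action for this state
L = np.array([[1.0, 0.0], [0.5, 2.0]])   # lower-triangular factor
P = L @ L.T                              # positive definite by construction
V = 1.5                                  # state value

q_at_mu = naf_q(mu, mu, P, V)            # advantage term vanishes: equals V
q_off = naf_q(mu + np.array([0.2, 0.2]), mu, P, V)
assert q_at_mu >= q_off                  # mu maximizes the quadratic Q
```

Parameterizing P through a triangular factor L, as above, is the standard way to keep the quadratic term negative semi-definite during learning.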
Training acceleration is one of the prominent research directions in the field of deep learning. Among other directions in this field, Hebbian learning is considered a highly promising approach. Although Hebbian learning does not produce models with accuracy comparable to training with the traditional backpropagation approach, there is an emerging trend of applying Hebbian learning as part of mixed training strategies that may include various backpropagation methods. Hebbian learning is also well suited to neuromorphic hardware due to its locality and highly parallel nature. In this paper, we survey existing approaches to applying Hebbian learning to training one of the largest and most in-demand classes of deep neural networks: Convolutional Neural Networks. We analyze the availability of existing software solutions for Hebbian learning. More importantly, we investigate various approaches to implementing Hebbian learning in convolutional and linear layers, as these are foundational for modern deep neural networks. This paper will be of interest both to researchers who want to accelerate training and to practitioners who might be interested in exploring new ways of training Convolutional Neural Networks on new types of hardware.
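As an illustration of the locality the abstract mentions, here is a minimal numpy sketch of a Hebbian-style update for a single linear unit, using Oja's rule, a standard stabilized variant (the data distribution and learning rate are illustrative, not from the paper):

```python
import numpy as np

def oja_step(w, x, lr=0.005):
    """Oja's rule: a purely local Hebbian update. The lr*y*x term is
    plain Hebb ("fire together, wire together"); the -lr*y*y*w decay
    keeps ||w|| bounded, and w converges toward the inputs' first
    principal component."""
    y = w @ x                    # post-synaptic activation
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
# Toy inputs with most variance along the first axis (illustrative)
X = rng.normal(size=(2000, 2)) * np.array([3.0, 0.3])

w = np.array([0.5, 0.5])
for x in X:                      # one pass, one sample at a time: no backprop
    w = oja_step(w, x)

w_unit = w / np.linalg.norm(w)
assert abs(w_unit[0]) > 0.95     # aligned with the dominant input direction
```

Note that the update uses only the unit's own input and output, which is exactly the locality property that makes such rules attractive for parallel and neuromorphic hardware.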
Continual Learning or overcoming catastrophic forgetting in neural networks
A software product has been developed for forecasting electricity consumption for each hour of the following day. Based on the Huber regressor machine learning method, a new, useful, high-quality mathematical model was built that links electricity consumption to the identified factors. The regression model produces hourly forecasts for the next day with an error of 3.03% on the test data set, and hourly forecasts up to three days ahead with a relative error of 4.82%.
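A minimal sketch of the idea behind the Huber regressor, fit here by iteratively reweighted least squares in plain numpy (the factor, data, and thresholds are hypothetical; a production model like the one in the abstract would use a library implementation and many more factors):

```python
import numpy as np

def huber_irls(X, y, delta=1.0, iters=20):
    """Robust linear fit under the Huber loss via iteratively
    reweighted least squares: residuals below delta get full weight
    (quadratic loss), larger ones are downweighted (linear loss),
    so outliers barely pull the fit."""
    Xb = np.column_stack([X, np.ones(len(X))])     # add intercept column
    beta = np.linalg.lstsq(Xb, y, rcond=None)[0]   # OLS warm start
    for _ in range(iters):
        r = np.abs(Xb @ beta - y)
        w = np.minimum(1.0, delta / np.maximum(r, 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
# Hypothetical hourly-load toy data: consumption driven by temperature
temp = rng.uniform(-10.0, 30.0, size=200)
load = 100.0 - 1.5 * temp + rng.normal(0.0, 1.0, size=200)
load[:5] += 50.0                                   # a few outlier readings

beta = huber_irls(temp.reshape(-1, 1), load)
# Slope and intercept stay near the true values despite the outliers
assert abs(beta[0] + 1.5) < 0.1 and abs(beta[1] - 100.0) < 2.0
```

This robustness to occasional bad meter readings is the usual reason to prefer the Huber loss over plain least squares for consumption forecasting.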
A software product for predicting electricity consumption for every hour of the next day.
Stanislav Karatsev, SKGMI (STU)
A review of machine learning applications in real-estate development
Despite the large volume of work and a high share of GDP, productivity in the construction industry has grown more slowly than in other sectors (on average, 1% per year over the past 20 years). Through digitalization, Samolet has already managed to increase productivity by 60%, and there is still much work ahead in this direction.
Neural networks trained using backpropagation are prone to catastrophic forgetting. If we first teach a network to recognize cats and then start teaching it to recognize dogs, it will forget some of what it learned about cats. This problem is especially evident when new data that needs to be learned keeps arriving while the neural network is in operation. This sub-area of machine learning is called Continual Learning. There is a wide variety of approaches to this problem, ranging from the simplest, such as remembering all previous data, to sophisticated weight-update schemes that reduce the forgetting of learned knowledge. We will discuss these and other methods in detail in this talk.
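The "remember previous data" baseline is usually implemented as a rehearsal buffer whose stored examples are mixed into each new-task batch. A minimal sketch with reservoir sampling, which keeps memory bounded over an unbounded stream (class and parameter names are our own, not from the talk):

```python
import random

class ReplayBuffer:
    """Rehearsal buffer with reservoir sampling: keeps a uniform
    random subset of everything seen so far in fixed memory."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Every example seen so far stays with probability capacity/seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw stored examples to mix into each new-task batch."""
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer(capacity=100)
for i in range(10_000):          # a stream of task-A then task-B examples
    buf.add(i)
assert len(buf.data) == 100      # memory stays bounded
batch = buf.sample(32)           # rehearse these alongside new-task data
```

Training on such mixed batches keeps gradients from the old task in play, which is what counteracts forgetting in rehearsal-based methods.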