Driver Pattern Recognition Using ML Quantization and Tabular Cloning Approach

Driver behavior detection is crucial in the transportation and automotive sectors for identifying speeding, aggressive, and distracted driving; improving safety; predicting hazards; and providing real-time feedback, thereby enhancing the efficiency and security of transportation networks. Machine learning, a subset of artificial intelligence, uses algorithms and statistical models to analyze large datasets and identify patterns that may be difficult for humans to detect. Driver behavior detection employs machine-learning techniques such as supervised, unsupervised, and reinforcement learning. Relatedly, quantization is known to improve the efficiency, speed, and performance of a trained model, making it beneficial for edge computing and IoT devices and enabling compact deployments and practical applications. This research focuses on precomputing a machine-learning model's output for all possible input attribute values and storing the responses in dedicated memory, so that inference becomes a lookup rather than a forward pass through the trained model. The efficacy of this strategy depends on the quantization of the input features, which requires choosing a number of bits that fits within the available memory capacity. The author tested the strategy on two machine-learning models, K-Nearest Neighbors (KNN) and Random Forest (RF), using quantization widths from 2 to 8 bits, yielding varying accuracies relative to the original models' outcomes: the RF model achieved 67% and 93% accuracy with 5 and 7 quantization bits, respectively, while the KNN model achieved 62% and 95% accuracy with 3 and 5 quantization bits.
By quantizing the input features and enumerating every possible input bit pattern along with its corresponding query output, this method can shorten inference time, making it easier to deploy the machine-learning model on edge or resource-constrained devices.
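The precompute-and-lookup strategy described above can be sketched in a few lines of Python. The helper names, the [0, 1] feature ranges, and the stand-in rule-based classifier below are hypothetical illustrations standing in for a trained KNN or RF model, not the paper's actual implementation:

```python
from itertools import product

def quantize(x, lo, hi, bits):
    """Map a continuous value in [lo, hi] to one of 2**bits integer levels."""
    levels = 2 ** bits
    idx = int((x - lo) / (hi - lo) * levels)
    return min(max(idx, 0), levels - 1)  # clamp to a valid bin

def dequantize(idx, lo, hi, bits):
    """Return the midpoint of quantization bin `idx`."""
    levels = 2 ** bits
    return lo + (idx + 0.5) * (hi - lo) / levels

# Stand-in for a trained model: classifies a normalized
# (speed, acceleration) pair as 0 = normal or 1 = aggressive driving.
def model_predict(speed, accel):
    return 1 if speed > 0.6 and accel > 0.5 else 0

BITS = 3               # 3 quantization bits -> 8 levels per feature
LEVELS = 2 ** BITS

# Precompute: run the model once for every quantized input combination
# and store the responses in a lookup table (the "designated memory").
table = {}
for s_idx, a_idx in product(range(LEVELS), repeat=2):
    s = dequantize(s_idx, 0.0, 1.0, BITS)
    a = dequantize(a_idx, 0.0, 1.0, BITS)
    table[(s_idx, a_idx)] = model_predict(s, a)

# Inference on a new sample is now a table lookup, not a model call.
sample = (0.75, 0.80)
key = (quantize(sample[0], 0.0, 1.0, BITS),
       quantize(sample[1], 0.0, 1.0, BITS))
print(table[key])  # 1 (aggressive)
```

With 2 features at 3 bits each, the table holds only 2^6 = 64 entries; memory grows as 2^(bits × features), which is why the bit width must be chosen against the available memory budget, trading lookup-table size against agreement with the original model.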