Training a digit recognizer using PyTorch, and inferencing on CPU with ONNX Runtime

In any machine learning problem, the goal of our neural network is to perform well on new, unseen data, and training a deep learning model is only part of achieving that goal. We also have to focus on running an inference session and ensuring the model works correctly in the environment where it is deployed. You can train your model in whichever framework you prefer, but the environment where you need to run inference may not be favorable to that particular framework. For example, some time ago, applications preferred the Caffe model format for deployment. Does this mean we have to use the same framework for training as well?
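To make the workflow in the title concrete, here is a minimal sketch of the idea: define a small digit classifier in PyTorch, export it to ONNX, and run inference on CPU with ONNX Runtime. The `DigitNet` architecture, file name, and tensor names below are illustrative assumptions, not the article's exact code.

```python
# Minimal sketch: export a small PyTorch digit recognizer to ONNX and run it
# on CPU with ONNX Runtime. Architecture and names here are assumptions.
import torch
import torch.nn as nn
import numpy as np
import onnxruntime as ort


class DigitNet(nn.Module):
    """A tiny MNIST-style classifier (28x28 grayscale -> 10 classes)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.net(x)


model = DigitNet()
model.eval()  # switch to inference mode before exporting

# Export to ONNX with a dynamic batch dimension.
dummy_input = torch.randn(1, 1, 28, 28)
torch.onnx.export(
    model,
    dummy_input,
    "digit_recognizer.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# Run inference on CPU with ONNX Runtime.
session = ort.InferenceSession(
    "digit_recognizer.onnx", providers=["CPUExecutionProvider"]
)
batch = np.random.rand(4, 1, 28, 28).astype(np.float32)  # stand-in for real images
(logits,) = session.run(["logits"], {"input": batch})
predictions = logits.argmax(axis=1)
print(predictions)  # predicted digit for each image in the batch
```

The key point is that once the model is in ONNX format, the inference environment only needs ONNX Runtime and does not have to know which framework trained the model.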

Nagaraj S Murthy

I'm Nagaraj from India. I graduated in 2018 with a major in electronics and telecommunications. I love to work on tasks related to deep learning, computer vision, and robotics. The ideas of implementing AI on edge devices and reinforcement learning fascinate me a lot. I like to write in-depth about what I learn, the projects I have done, and more.
