AMD and ONNX release Open-Source Toolchain, TurnkeyML, for agile model development and deployment.

Lately, it seems like you can’t open a news or social media app without seeing an announcement of an innovative model being launched, a new software (SW) stack in development, or yet another hardware (HW) accelerator being released. In this accelerated environment, it’s challenging, as a developer, to reason about the marketing claims and decide which solution works best for your problem space. Evaluating a machine learning (ML) model on a single SW stack and a single HW accelerator that you’re familiar with is a relatively straightforward problem. Evaluating the ever-increasing number of models on multiple software stacks, across the wide range of HW accelerators available today, is exponentially more complicated. Frameworks like ONNX, which aim to create a level playing field by providing a vendor-agnostic representation of machine learning models, are crucial for evaluating these state-of-the-art (SOTA) models on different hardware accelerators. Yet keeping these repositories of models up to date, and therefore relevant, amid the pace of technological advancement is more challenging still.

Introducing TurnkeyML

TurnkeyML is a tools framework that integrates models, toolchains, and hardware backends to make evaluating and acting on this landscape “as simple as turning a key”. The goal of TurnkeyML is to provide a single way to exercise the many combinations of these three axes, and to reason about them, even as each axis constantly evolves.
So how does TurnkeyML do this? The framework streamlines the process of ingesting any open-source PyTorch model, optimizing it, and executing it across a diverse set of hardware targets. And it does all this in a way that’s completely transparent to the user. It’s also extensible, meaning that you can easily adapt the framework to your use case and needs, whether that means adding a new model, a different export or optimization tool, or an additional hardware accelerator. Along with the TurnkeyML framework, there is a diverse corpus of ready-made models that a user can grab off the shelf to hit the ground running.

Getting started is as simple as:
pip install turnkeyml
turnkey my_model.py
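
Here, my_model.py can be any ordinary PyTorch script in which a model is instantiated and called on example inputs. A minimal, hypothetical example of such a script is sketched below; the SmallClassifier model, its layers, and the input shapes are made up purely for illustration.

# my_model.py -- a minimal, hypothetical example of the kind of script shown
# above: an ordinary PyTorch script that instantiates a model and calls it on
# example inputs. The model and shapes are made up purely for illustration.
import torch

class SmallClassifier(torch.nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.layers(x)

# Instantiate the model and run it once so the toolchain can observe the
# model instance and its input shapes when it executes this script.
model = SmallClassifier()
inputs = torch.randn(1, 128)
outputs = model(inputs)
print(outputs.shape)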

Use Cases:
At a very high level, let’s look at some potential use cases where TurnkeyML can really empower a developer.

· ONNX Model Zoo: With so many new models, and variations on existing models, being published at an unprecedented rate, it’s challenging to keep up with what is state-of-the-art, let alone compare models across different opsets and data types. TurnkeyML provides an easy way to export thousands of ONNX files across different opsets and data types; a rough sketch of this kind of export appears after this list. Check out the newly refreshed ONNX Model Zoo at https://github.com/onnx/models.

· Performance Validation: Once you have this corpus of models, you want to see how they perform on a variety of hardware devices and runtimes. Perhaps that means identifying which models work best on a given piece of hardware, or perhaps it’s deciding which hardware to select for your model and use case. With TurnkeyML, you can easily evaluate product-market fit.

· Functional Coverage: Measuring the functional coverage of a toolchain/hardware combination over a large corpus of models can take a few different forms. I’m developing a new compiler for my custom hardware accelerator, and I’d like to understand how many models it supports. I’ve made optimizations to my software stack; do they affect my model coverage? Of the models I say my hardware supports, do they provide enough functional coverage for where industry model trends are heading?

· Stress testing: I’d like to run back-to-back inferences across thousands of models and log the results. This will help me find bugs in my software stack and stress my hardware, providing valuable insight into product readiness.

· Model insights: I’ve been developing a model, or I’ve recently decided to investigate a newly published one. Using TurnkeyML, I can analyze the model and learn its parameter count, input shapes, which opset it uses, and more; the sketch after this list shows the kind of information involved.
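
To make the Model Zoo and model-insights use cases above more concrete, here is a rough, hypothetical sketch of the kind of work TurnkeyML automates at scale: exporting a PyTorch model to ONNX at a chosen opset and then reading back basic insights such as parameter count and input shapes. This is illustrative only, not the toolchain’s actual implementation, and it reuses the made-up SmallClassifier from the earlier example.

# sketch.py -- a rough, hypothetical sketch of what TurnkeyML automates at
# scale: export a PyTorch model to ONNX at a chosen opset, then read back
# basic model insights. Illustrative only; not the toolchain's implementation.
import math

import onnx
import torch

from my_model import SmallClassifier  # the made-up model from the earlier example

model = SmallClassifier().eval()
example_input = torch.randn(1, 128)

# Export at a specific ONNX opset -- one point on the "opset" axis that the
# ONNX Model Zoo refresh sweeps across.
torch.onnx.export(model, example_input, "small_classifier.onnx", opset_version=17)

# Reload the exported file and pull out a few basic insights.
onnx_model = onnx.load("small_classifier.onnx")

parameter_count = sum(math.prod(init.dims) for init in onnx_model.graph.initializer)
input_shapes = {
    inp.name: [d.dim_value for d in inp.type.tensor_type.shape.dim]
    for inp in onnx_model.graph.input
}
opset_version = onnx_model.opset_import[0].version

print(f"parameters: {parameter_count}")
print(f"input shapes: {input_shapes}")
print(f"opset: {opset_version}")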

Conclusion:
Recognizing the diverse set of challenges introduced by the rapid release of SOTA models, the ONNX ecosystem has moved to an automated approach for the continuous integration and deployment of cutting-edge model architectures on a variety of hardware. TurnkeyML streamlines the evaluation process by integrating models, toolchains, and hardware backends. This user-friendly framework simplifies the ingestion, optimization, and execution of PyTorch models across various hardware targets, providing transparency and adaptability.


Visit the GitHub repository https://github.com/onnx/turnkeyml for an in-depth look at TurnkeyML, complete with detailed instructions and user guides, the full framework, and access to ONNX-ready open-source models at https://github.com/onnx/models. Your technical insights, contributions, and ideas are highly valued. Feel free to connect with our development team via amd_ai_mkt@amd.com for any technical inquiries.
