There has been a long debate over the best system architectures for supporting new machine learning workflows. Incumbent distributed, batch-style HPC systems are appealing because scientists often already have access to these resources. However, Kubernetes clusters have been gaining mindshare for data-centric compute workloads through projects like Kubeflow and Open Data Hub. There are many reasons for this, but chief among them is the need for data scientists to push their models into production applications. To best illustrate the challenges of running code in production, we will analyze one of the most operationally challenging environments: the edge. Red Hat, HPE, and NVIDIA have partnered to create KubeFrame, which solves these production issues for data scientists and allows them to stay focused on the science.