It’s been a busy two years. I’ll admit I have neglected this blog, but a lot has happened since my last post. Between having kids and juggling multiple projects, time has been scarce. I have also grown as a software engineer: moving from theory to practice, I have learned quite a bit about cloud architecture and its intersection with machine learning.
NVIDIA is in a fortunate position. Few would have guessed that a graphics card manufacturer would become the leading producer of machine learning hardware. Regardless, my love of video games and computers has paid off: the future of AI depends on the hardware that powered my hobbies of the past. Working with NVIDIA tooling, I can’t help but appreciate the tools and components deployed to service these workloads. Whenever I want to impress someone with the scale of machine learning hardware, I tell them about the DGX systems. Most people are used to the copper Ethernet connections on the routers that deliver their Wi-Fi. Some enthusiasts hardwire connections to their computers, but I seldom hear anyone excited to talk about fiber optics. That contrast helps a layperson grasp the power of these machines: a single DGX system uses ten fiber optic InfiniBand network interface cards. The amount of hardware at the base layer is staggering—and that’s before these systems are aggregated into the larger SuperPOD clusters.
I have worked extensively on delivering solutions built around NVIDIA Morpheus and multi-GPU architectures. It feels like we’re in the middle of an AI gold rush. Hopefully, there is more to come!