David is a Senior AI/ML Engineer within the Office of the CTO at NetApp, where he’s dedicated to empowering developers to build, scale, and deploy AI/ML solutions in production environments. He brings deep expertise in building and training models for applications such as NLP, computer vision, real-time analytics, and even the classification of debilitating diseases. His mission is to help users build, train, and deploy AI models efficiently, making advanced machine learning accessible at every skill level.
Before NetApp, he was deeply involved in the AI/ML community, focusing on conversational AI solutions and driving AI platform growth in a DevRel and pre-sales role. He frequently shares his insights at industry conferences and events, offering hands-on guidance for implementing AI/ML in cloud environments. His prior experience includes contributing to the Kubernetes and CNCF ecosystems, working directly with VMware virtualization, implementing backup/recovery solutions, and developing firmware and drivers for hardware storage adapters.
The AI industry is shifting from bigger to better. As companies chase efficiency and performance, quantization has emerged as one of the most effective ways to make models smaller, faster, and more affordable—without crippling accuracy. With recent breakthroughs from teams like DeepSeek proving that optimization can shake entire markets, developers are rethinking what "efficient AI" really means. The real question isn't whether we can make models smarter... it's whether we can make them smarter per watt, per dollar, and per millisecond.
This session explores the full lifecycle of model quantization and how it powers the rise of Small Language Models (SLMs) and agentic AI systems. We'll cover how quantization works, when it pays off, and how it changes deployment tradeoffs across CPUs, GPUs, and AI accelerators. Attendees will walk away with practical techniques for compressing models, tuning quantization-aware training, and deploying specialized SLMs in multi-agent systems built on the Agent2Agent (A2A) protocol. The end goal is to get the most out of your hardware while staying responsive, without breaking the bank.
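To make the core idea concrete before the session, here is a minimal sketch of post-training, per-tensor symmetric int8 quantization in plain NumPy; the function names quantize_int8 and dequantize are illustrative, not taken from the session materials.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference-time math."""
    return q.astype(np.float32) * scale

# Example: a toy weight matrix shrinks 4x (float32 -> int8) with small error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
print("bytes:", w.nbytes, "->", q.nbytes)
```

Per-tensor symmetric scaling is the simplest possible scheme; production toolchains typically add per-channel scales, calibration data, or quantization-aware training to claw back accuracy.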
Every keyboard has a sound signature. Every click and clack carries information. With deep learning and a decent microphone, that information can be weaponized. In this session, we'll explore how modern AI models can identify what you're typing just from the sound of your keyboard. Using a dataset of recorded keystrokes and an open source sound classification pipeline, we'll walk through building a model that can recover text with startling accuracy. You'll see firsthand how a few lines of Python and a trained network can turn your laptop's microphone into an acoustic keylogger.
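For a feel of what such a pipeline looks like, here is a minimal sketch assuming a self-recorded folder of labeled keystroke clips; the file paths, labels, and the choice of a random-forest classifier over log-mel features are illustrative assumptions, not the session's exact stack.

```python
import numpy as np
import librosa  # audio loading and feature extraction
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def keystroke_features(wav_path: str) -> np.ndarray:
    """Turn one recorded keystroke clip into a fixed-length log-mel feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Average over time so clips of slightly different lengths compare cleanly.
    return log_mel.mean(axis=1)

# Hypothetical dataset layout: (path_to_clip, key_label) pairs you recorded yourself.
dataset = [("clips/a_001.wav", "a"), ("clips/b_001.wav", "b")]  # ...one entry per clip

X = np.stack([keystroke_features(path) for path, _ in dataset])
y = [label for _, label in dataset]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("per-key accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With enough clips per key, even this shallow classifier separates keys surprisingly well, which is exactly why uniform layouts and consistent typing styles are so exploitable.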
But this talk isn't about enabling surveillance... it's about understanding it to fight back. We'll unpack why uniform keyboard layouts and consistent typing styles make these attacks so effective, then explore real countermeasures: signal masking, password entropy, and environmental noise defenses. You'll leave with a practical understanding of how these attacks work, how to reproduce them for research or awareness, and how to harden your systems (and yourself) against them.
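As one crude illustration of signal masking, the sketch below plays broadband noise while you type, burying keystroke transients in the recording; it assumes the third-party sounddevice package and is a demonstration of the idea, not a vetted defense.

```python
import numpy as np
import sounddevice as sd  # assumes the PortAudio-backed sounddevice package is installed

SR = 44100
duration_s = 60  # mask roughly one minute of typing

# Broadband noise overlapping keystroke transients makes individual clicks
# much harder for a classifier to isolate and label.
noise = 0.05 * np.random.randn(SR * duration_s).astype(np.float32)
sd.play(noise, SR)
sd.wait()
```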
