Wouldn’t it be nice if, while searching the Internet for the latest performance results of our beloved frameworks, we could trust whatever is written and call it a day?
In this session we’ll examine a few poorly written benchmarks, dissecting and dismantling them down to the last bit, while also demonstrating a psychological trick used to distract the reader’s attention from what’s actually important.
We will show you how to uncover the lies and damn lies behind articles and benchmark results, with skeptical pragmatism and methodologies borrowed from the performance engineering field and philosophy.
You should come away from this session with a better understanding of the tools needed to reveal the weak points of a benchmark or, conversely, to convince users of some new benchmarking lie 👿
Georgios Andrianakis
Red Hat
Georgios works for Red Hat as a Senior Principal Software Engineer and is currently the most active contributor to Quarkus, where he works across many areas, including LangChain4j, RESTEasy Reactive, Spring compatibility, Kubernetes support, testing, and Kotlin.
He is also an enthusiastic promoter of Quarkus who never misses a chance to spread the Quarkus love!