Learn why nothing beats testing a product and running your own evaluation.
You may wonder how someone from EXASOL can make such a bold statement. Especially when we continue to publish official benchmark numbers, notably those from the Transaction Processing Performance Council, where we show strong leadership in all categories ranging from 100 GB up to 100 TB of data. As you'll know, the TPC-H benchmark is a tough and neutral yardstick; it's a good indicator for comparing the different database solutions out there in the market when it comes to evaluating performance, scalability and costs. And it goes without saying that we're delighted to dwarf the competition through the unrivalled scalability and performance of our in-memory, massively parallel, analytic database: EXASolution 5.0.
However, we have to recognise that while the TPC-H benchmark tests show off EXASOL in a great way, it is still only one benchmark, reflecting one particular way of using an analytic database. Today, the range of data applications out there is far broader than it was just five years ago. That's why we always encourage our prospects to test any solution themselves against their own data volumes and their own requirements: the level of concurrent query workload, the loading processes and the set of analytic queries they actually need to run. The deeper they dive into the products' capabilities, the better informed their final choice of database will be. In other words, we let the customer run their own benchmark.
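To make "run your own benchmark" concrete, here is a minimal sketch of the idea: time your own query set against the candidate database and sanity-check the results. The table, data and queries below are purely hypothetical, and an in-memory SQLite database stands in for whichever system you are actually evaluating; a real evaluation would swap in your own driver, your production-scale data, loading processes and concurrent workloads.

```python
import sqlite3
import time

# Hypothetical query set -- in a real evaluation these would be the
# analytic queries your own workload actually runs.
QUERIES = {
    "total_rows": "SELECT COUNT(*) FROM sales",
    "revenue_by_region": "SELECT region, SUM(amount) AS revenue "
                         "FROM sales GROUP BY region ORDER BY region",
}

def setup_demo_db():
    """In-memory SQLite stands in for the database under test."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("EU", 10.0), ("EU", 20.0), ("US", 5.0)])
    return conn

def run_benchmark(conn, queries):
    """Time each query and keep its result set for sanity checks."""
    results = {}
    for name, sql in queries.items():
        start = time.perf_counter()
        rows = conn.execute(sql).fetchall()
        results[name] = (time.perf_counter() - start, rows)
    return results

if __name__ == "__main__":
    conn = setup_demo_db()
    for name, (seconds, rows) in run_benchmark(conn, QUERIES).items():
        print(f"{name}: {seconds * 1000:.2f} ms, {len(rows)} row(s)")
```

Checking the returned rows alongside the timings matters: a fast wrong answer tells you nothing, so any harness you build should verify results as well as measure latency.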
And with the current trend of adopting best-of-breed solutions instead of depending on a single vendor, it has never been more important to put products thoroughly through their paces. Every week new technologies and products pop up, and it can be difficult to understand their advantages and disadvantages without fully testing them first. Marketing messages may sound promising, but we have become jaded, as many have resulted in nothing more than bold hype and unclear claims.
Another important reason why you should run your own benchmark is that you’ll soon discover whether the vendor is the right fit for your organisation. The technology is only one half of the equation; you also need to understand how the software vendor will support you, how much time and effort they will invest in you and your success and how responsive they will be when dealing with technical issues. This is especially important when it comes to implementing the solution, not just in the evaluation phase.
Finally, don’t forget to talk to existing vendor references, especially those from your own vertical market who have gone through a decision process similar to yours. They’ll be able to give you valuable insights into real-world integration, implementation and daily usage of the technology in question. The time you invest in these discussions upfront may save you a lot of hassle and money later on.
In conclusion, industry-standard benchmarks that companies can consult are incredibly important, but nothing beats testing a product and running your own evaluation.
The customer is king. And the customer is the ultimate benchmark.