The much-hyped Oracle 12c In-Memory was finally announced by Larry Ellison yesterday. What do we know about it now?
Following on from my previous blog post (which actually turned out to be pretty accurate):
Things we learned from the Oracle 12c In-Memory launch
- Their in-memory option combines row and column stores and caches (like HP Vertica since 2005)
- If you want to put a data object into memory, you need to say so (like Kognitio since 1988)
- Transaction processing can be accelerated as well as analytics – just drop all your analytical indexes and use in-memory instead (like SAP Hana since 2010)
- Joins are replaced by full-table scans for analytical queries (like IBM Netezza since 2000)
- Everything runs faster in-memory (like Exasolution since 2006)
Things we were told that are obviously untrue
Perhaps Mr. Ellison misspoke and an apology is in the mail, but …
- Oracle is NOT the only in-memory system that can keep running after losing a node (Exasol can)
- Oracle is NOT the only in-memory system that keeps running when it runs out of memory (Exasol can)
- Oracle did NOT set a world record with their tennis example – you can’t just run any query that suits you and say “Hey, that’s a world record right there” (Exasol actually does hold a recognised, audited world record for analytic processing)
Things we didn’t learn
- How much does it cost?
- How does it compare to something that isn’t another version of Oracle?
It’s a mutant combination of a bunch of technologies that have been around for years.
Despite all of this being old news, the Marketing describes it as groundbreaking.
Despite it being a very complex solution, the Marketing describes it as “so easy it’s actually boring”.
Incredible what you can do with Marketing words.
It is not yet generally available, but when it is (next month?), I’m so looking forward to seeing whether it can keep up with Exasol in a real-world test.