Why add RedPanda/Kafka over using async insert? https://clickhouse.com/docs/optimize/asynchronous-inserts
It’s recommended in the docs over the Buffer table, and is pretty much invisible to the end user.
At ClickHouse Inc itself, this scaled far beyond millions of rows per second: https://clickhouse.com/blog/building-a-logging-platform-with...
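For anyone who hasn't tried it, here's a minimal sketch of what this looks like from the client side, assuming the clickhouse-connect Python driver and a hypothetical `events` table (the two settings are the ones described in the linked docs):

    import clickhouse_connect

    client = clickhouse_connect.get_client(host='localhost')

    # async_insert=1 makes the server buffer small inserts and flush them
    # as one part; wait_for_async_insert=0 acks before the flush completes.
    client.insert(
        'events',
        [[1, 'click'], [2, 'view']],
        column_names=['id', 'event'],
        settings={'async_insert': 1, 'wait_for_async_insert': 0},
    )

The app keeps doing small single inserts and the batching happens server-side, which is what makes it invisible to the end user.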
The biggest reason is that you may also have other consumers than just Clickhouse.
Off-topic rant: I hate when websites hide the scrollbar. By all means, apply minimal styling to make it cohesive with the website background and foreground. But don't completely hide it.
This is included in that page's stylesheet:
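    /* presumably a WebKit-only rule along these lines,
       given the Firefox reply below */
    ::-webkit-scrollbar { display: none; }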
Another reason to use Firefox: it doesn't respect that CSS :)
Weird, I always think real time when I think OLAP, but I guess that's on the "consumption reactivity" side, not the "batch inserts are good" side.
See, it's the exact opposite for me, although my experience is mostly a) building giant cubes in giant enterprise orgs with hourly data volumes you couldn't fit in memory, and b) 10-15 years old (so the hardware sucked and we didn't have DuckDB). But yeah, I don't think the O in OLAP standing for 'online' ever really made sense.
I'm curious to know how much of this article is OLAP-specific vs just generic good practice for tuning batch insert chunk size. The whole "batch your writes, use 100k rows or 1s worth of data" thing applies equally to pretty much any database; they're just ignoring the availability of built-in bulk-load methods so they can argue that INSERTs are slow and then fix it by adding Kafka, for reasons? Maybe I'm missing something. A rough sketch of the generic rule (the thresholds and the insert_batch callback are placeholders, not from the article):
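    import time

    MAX_ROWS = 100_000   # flush once the buffer holds 100k rows...
    MAX_AGE_S = 1.0      # ...or once the oldest buffered row is 1s old

    class BatchWriter:
        def __init__(self, insert_batch):
            self.insert_batch = insert_batch  # e.g. one bulk INSERT/COPY call
            self.buffer, self.oldest = [], None

        def write(self, row):
            if self.oldest is None:
                self.oldest = time.monotonic()
            self.buffer.append(row)
            if (len(self.buffer) >= MAX_ROWS
                    or time.monotonic() - self.oldest >= MAX_AGE_S):
                self.insert_batch(self.buffer)  # one big write, not N tiny ones
                self.buffer, self.oldest = [], None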
Well yeah that's the sales pitch :)
It's a tradeoff. Analytics databases are often filled with periodic dumps of transactional databases, and that feels so dirty that it's easy to forget it isn't just a hack but a structural workaround for the poor random-write performance of analytics DBs:
OLTP = more read amplification on analytics workflows, less write amplification on random inserts
OLAP = less read amplification on analytics workflows, more write amplification on random inserts
If that's too theoretical: the other day I saw a 1-row update of about 10 KB of data lead to ~1 GB of writes in Redshift: 1 MB block size times 300 columns times a log+shuffle factor of about 3. That's a write amplification factor of about 100,000. Crazy stuff.
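Spelling that arithmetic out (same numbers, rounded the same way):

    # one rewritten 1 MB block per column, times ~3x log+shuffle overhead
    bytes_written = 300 * 1024**2 * 3            # ~900 MB, i.e. roughly 1 GB
    amplification = bytes_written / (10 * 1024)  # ~92,000, i.e. ~1e5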