Optimizing ClickHouse: Tactics that worked for us (highlight.io)
131 points by podoman 26 days ago | 25 comments



We've been using highlight for our bootstrapped member management platform[1] and I gotta say I'm super impressed with the session replay feature, it's really helpful for understanding user behavior at a fraction of the price of competitors.

I remember wanting to use Heap's session replay only to realize they want hundreds of dollars per _month_; my last bill with highlight was $2.38, I recall.

That's all to say that I'm glad Highlight is figuring out how to scale while still offering their features to the small players of the world.

[1] https://embolt.app


Very nice and very inspirational for someone bootstrapping a startup.

The pages clearly defined what you are building and how to use it.

The explanation of platform fees makes sense, though it could be clearer whether the pricing examples are based only on those fees or whether accounts are limited by number of members or dues.

You might want to check your terms of service, they do not list a jurisdiction and have the placeholder [jurisdiction] instead.

Best of luck with it!


Highlight.io cofounder here. Thanks for the shout out. Glad to hear you like the product; continue to share feedback as you use it!


Highlight looks great. Does it support Elixir LiveView?


We do, via the OpenTelemetry Elixir SDK (and the Phoenix extension): https://opentelemetry.io/docs/languages/erlang/


clickhouse-local is pretty slick as well. You can operate directly on text files as if they were tables. I made my own toy text file database thing and thought I was cool because I could outrun similar programs like q, textql, sqlite, etc. But clickhouse-local beat me by a factor of 10, easily, in every kind of query with every type of data. Those guys know stuff.
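
For a feel of it, something like this works directly in clickhouse-local (file name and columns are made up here):

    -- query a CSV on disk as if it were a table; file() infers the schema from the header
    SELECT status, count() AS hits
    FROM file('access_log.csv', CSVWithNames)
    GROUP BY status
    ORDER BY hits DESC;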


Have heard good things about local. Clickhouse more broadly is an awesome project and they're always on standby to help our eng team as well.


We found the "lots of small inserts" issue, and fixed it by just using the Buffered table engine[0]. Can create it as a replica of the destination table, and it stores inserts in memory until they cross a threshold and are written. Super simple and took 5 minutes.

[0] https://clickhouse.com/docs/en/engines/table-engines/special...


Why not just use async inserts that internally do the same thing?


Once we hit >100k inserts per second, async inserts didn't work well for us because we had limited control over the background async insert batching happening on the cluster. The background inserts would be too small, resulting in many merges running, which caused high CPU and high back-pressure (latency) on async inserts, and ultimately an ingestion delay.

Plus, async inserts are only available on ClickHouse Cloud.


Asynchronous INSERTs are available in the open-source ClickHouse version and in ClickHouse Cloud. The feature is independent of the Cloud.

To enable asynchronous INSERTs, set `async_insert` to true.

Here is the documentation: https://clickhouse.com/docs/en/optimize/asynchronous-inserts ; intro: https://clickhouse.com/blog/asynchronous-data-inserts-in-cli... and a hands-on guide: https://clickhouse.com/blog/monitoring-asynchronous-data-ins...
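
For example (hypothetical table; the settings can also go in a user profile or be passed per query):

    -- enable async inserts for the session; wait_for_async_insert makes the
    -- INSERT return only after the buffered batch has been flushed
    SET async_insert = 1, wait_for_async_insert = 1;

    INSERT INTO events (timestamp, message) VALUES (now(), 'hello');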


Like the sibling comment says, async inserts are part of the OSS version. Batch flushes are tunable in two dimensions: size and latency. I think it's likely the default size was just too low for your use case.
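
Something along these lines (values are illustrative, not recommendations):

    -- flush the async insert buffer at ~10 MiB of pending data or after 200 ms,
    -- whichever comes first
    SET async_insert = 1,
        async_insert_max_data_size = 10485760,
        async_insert_busy_timeout_ms = 200;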


I forget the exact details, but we ran into issues tuning the two settings you're describing on the Cloud instance (they were user-level settings that seemed to get reset by ClickHouse Cloud because they were updated based on the cluster size). Perhaps this could change with Cloud and isn't an issue with the OSS version.


What sort of insert volume are you handling with that?


> We opted to use the ClickHouse Kafka Connect Sink that implements batched writes and exactly-once semantics achieved through ClickHouse Keeper.

Just a heads up, you've got a repeated line there.


Might have been a joke on "exactly once" ;)


I see what you did there.



Since you're fixing typos, this one also:

> OTLP


Very interesting observations! Merge performance tuning seems often overlooked even though it's a key aspect of sustained ClickHouse performance.

I also like that the blog is quite compact and gets the points across without getting too much into the weeds.

One thing I've also noticed is that bloom filter index types can be quite costly to merge. In many cases that's acceptable, though, due to the massive benefit they provide for text queries. One just has to be mindful of the overhead when adding them.
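
For reference, adding one looks roughly like this (hypothetical table and column; the bloom filter size, hash count, and granularity need tuning per workload):

    ALTER TABLE logs ADD INDEX body_tokens Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 4;

    -- building it for existing parts is where the extra merge/materialize cost shows up
    ALTER TABLE logs MATERIALIZE INDEX body_tokens;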


Exploring bloom filter index merges would be an interesting addition. I do wish it were easier to profile merge performance to break down where most of the CPU time is being spent.


Thanks for sharing! I'm curious as to your approach to changing the ORDER BY key for such large tables without significant downtime, since AFAIK this can't be done in place (see: https://kb.altinity.com/altinity-kb-schema-design/change-ord...). Are you able to share any details?


Materialized views can help change the ORDER BY with zero downtime (rough sketch after the steps below):

* Create a new version of the table with the new ORDER BY.

* Create a materialized view that will insert from the old table to the new table.

* Update your application code to query the new table.

* Start inserting data into the new table.
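
A sketch of the first two steps, with a made-up schema:

    -- new table with the desired ORDER BY
    CREATE TABLE logs_v2
    (
        ProjectId UInt32,
        Timestamp DateTime,
        Body      String
    )
    ENGINE = MergeTree
    ORDER BY (ProjectId, Timestamp);

    -- forward rows inserted into the old table to the new one
    CREATE MATERIALIZED VIEW logs_to_v2 TO logs_v2 AS
    SELECT ProjectId, Timestamp, Body FROM logs;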


What size is the cluster? Just curious how much hardware is needed to handle such traffic :)


3 nodes of 120 GiB and 30 vCPU each.



