feat: add broker setup

This is mainly to decouple the actual sending of events from the ingest.

We now ingest the data and update consumer groups with the new offset. Consumer groups then continuously send out data in the background. They tick once per second between checks, but if a run takes longer than a second, the next run simply continues from where the previous one left off.
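The "continue from where we left off" behavior can be sketched as a consumer group that tracks the last offset it has sent out; each tick drains everything between that offset and the head of the topic log. The type and method names below are illustrative assumptions, not nodata's actual API.

```rust
// Hypothetical sketch of the resume-from-offset behavior described above.
struct ConsumerGroup {
    // Last offset this group has already sent out.
    offset: usize,
}

impl ConsumerGroup {
    fn new() -> Self {
        Self { offset: 0 }
    }

    // One tick: send everything between our stored offset and the topic's
    // head. If a previous tick ran long, `offset` already reflects its
    // progress, so this run continues from where it left off.
    fn tick(&mut self, topic_log: &[String]) -> Vec<String> {
        let batch = topic_log[self.offset..].to_vec();
        self.offset = topic_log.len();
        batch
    }
}

fn main() {
    let mut group = ConsumerGroup::new();
    let mut log = vec!["a".to_string(), "b".to_string()];
    assert_eq!(group.tick(&log), vec!["a".to_string(), "b".to_string()]);
    log.push("c".to_string());
    // Only the new entry since the last tick is sent.
    assert_eq!(group.tick(&log), vec!["c".to_string()]);
    // Nothing new: empty batch.
    assert!(group.tick(&log).is_empty());
    println!("ok");
}
```

In the real system the tick would be driven by a background timer; the key design point is that progress lives in the stored offset, so overlapping or delayed runs cannot double-send.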

Signed-off-by: kjuulh <contact@kjuulh.io>
2024-08-16 00:25:43 +02:00
parent f818a18d65
commit d0ea8019e1
11 changed files with 391 additions and 107 deletions

@@ -22,3 +22,16 @@ Nodata accepts wasm routines for running aggregations over data to be processed
## Data Egress
Nodata exposes aggregations as APIs, or as events streamed over gRPC to a service.
# Architecture
## Data flow
Data enters nodata:
1. Application uses SDK to publish data
2. Data is sent over gRPC with a topic, an id, and the data payload
3. Data is sent to a topic
4. A broadcast is sent that said topic was updated at a given offset
5. A client can consume from said topic, given a topic and id
6. A queue consumes each broadcast message, assigning a job to each consumer group to deliver the new messages
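The publish and consume halves of the flow above can be sketched as a minimal in-memory broker: publish appends to a topic log and returns the new offset (the value that would be broadcast), and consume reads from a given offset up to the head. Names here are assumptions for illustration, not nodata's actual types.

```rust
use std::collections::HashMap;

// Minimal sketch of the data flow: topic logs keyed by name,
// each entry an (id, data) pair.
struct Broker {
    topics: HashMap<String, Vec<(String, String)>>,
}

impl Broker {
    fn new() -> Self {
        Self { topics: HashMap::new() }
    }

    // Steps 2-4: accept (topic, id, data), append it to the topic log,
    // and return the new offset that would be broadcast to listeners.
    fn publish(&mut self, topic: &str, id: &str, data: &str) -> usize {
        let log = self.topics.entry(topic.to_string()).or_default();
        log.push((id.to_string(), data.to_string()));
        log.len()
    }

    // Steps 5-6: a consumer group reads from its last known offset
    // up to the head of the topic log.
    fn consume(&self, topic: &str, from: usize) -> &[(String, String)] {
        self.topics
            .get(topic)
            .and_then(|log| log.get(from..))
            .unwrap_or(&[])
    }
}

fn main() {
    let mut broker = Broker::new();
    let offset = broker.publish("metrics", "sensor-1", "42");
    assert_eq!(offset, 1);
    broker.publish("metrics", "sensor-2", "43");
    // A consumer that has already seen offset 1 only gets the new entry.
    assert_eq!(broker.consume("metrics", 1).len(), 1);
    println!("ok");
}
```

The broadcast in step 4 carries only the topic and offset, not the data itself; consumer groups use it as a wake-up signal and then read the log, which is what lets ingest stay decoupled from delivery.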