nodata
Nodata is a simple binary that consists of four parts:
- Data ingest
- Data storage
- Data aggregation
- Data API / egress
Data ingest
Nodata presents a simple protobuf gRPC API for ingesting either single events or batches of events.
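The ingest request shapes can be sketched roughly as follows. This is a hypothetical Rust model, not the actual generated protobuf types (those live in `proto/nodata/v1`, and the real field names may differ):

```rust
// Hypothetical sketch of the ingest request shapes; the actual protobuf
// definitions in proto/nodata/v1 are authoritative.

/// A single event: a topic, an id, and an opaque payload.
#[derive(Debug, Clone)]
pub struct PublishEventRequest {
    pub topic: String,
    pub id: String,
    pub data: Vec<u8>,
}

/// A batch publish wraps multiple events in one call,
/// amortizing the per-request overhead.
#[derive(Debug, Clone)]
pub struct PublishEventsRequest {
    pub events: Vec<PublishEventRequest>,
}

fn main() {
    let single = PublishEventRequest {
        topic: "orders".into(),
        id: "order-1".into(),
        data: b"{\"amount\": 42}".to_vec(),
    };
    let batch = PublishEventsRequest {
        events: vec![single.clone()],
    };
    assert_eq!(batch.events.len(), 1);
    println!("published {} event(s) to {}", batch.events.len(), single.topic);
}
```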
Data storage
Nodata stores data locally using a partitioned Parquet scheme.
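A minimal sketch of what a partitioned layout could look like, assuming hive-style partitioning by topic and date (the actual on-disk scheme nodata uses may differ):

```rust
// Hypothetical sketch of a partitioned Parquet layout; assumes
// hive-style topic/date partitioning, which may not match nodata's
// real scheme.

/// Build the path a Parquet segment would land in for a given
/// topic and ingestion date.
pub fn partition_path(root: &str, topic: &str, date: &str) -> String {
    format!("{root}/topic={topic}/date={date}/data.parquet")
}

fn main() {
    let path = partition_path("/var/lib/nodata", "orders", "2024-07-01");
    assert_eq!(
        path,
        "/var/lib/nodata/topic=orders/date=2024-07-01/data.parquet"
    );
    println!("{path}");
}
```

Partitioning by topic and date keeps each query scoped to a small set of files instead of scanning the whole store.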
Data aggregation
Nodata accepts WASM routines for running aggregations over the data to be processed.
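Conceptually, an aggregation routine folds a stream of events into a state. The sketch below stands in for the WASM boundary with a plain Rust trait; the trait and its method names are hypothetical, not nodata's actual interface:

```rust
// Hypothetical model of an aggregation routine. In nodata these
// routines are compiled to WASM; a plain Rust trait stands in for the
// host/guest boundary here.

/// An aggregation folds raw events into a state.
pub trait Aggregation {
    type State;
    fn init(&self) -> Self::State;
    fn apply(&self, state: Self::State, event: &[u8]) -> Self::State;
}

/// Example routine: count the events seen.
struct CountEvents;

impl Aggregation for CountEvents {
    type State = u64;
    fn init(&self) -> u64 {
        0
    }
    fn apply(&self, state: u64, _event: &[u8]) -> u64 {
        state + 1
    }
}

fn main() {
    let agg = CountEvents;
    let events: Vec<&[u8]> = vec![b"a", b"b", b"c"];
    let state = events.iter().fold(agg.init(), |s, e| agg.apply(s, e));
    assert_eq!(state, 3);
}
```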
Data Egress
Nodata exposes aggregations as APIs, or as events streamed over gRPC to a service.
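The two egress shapes can be illustrated with the sketch below: a point-in-time read of an aggregation, and a stream of events starting from a given offset. Both function names and signatures are hypothetical; in nodata the stream would be a server-streamed gRPC API:

```rust
use std::collections::HashMap;

// Hypothetical sketch of the two egress shapes; names are illustrative.

/// Point-in-time read: return the current value of a named aggregation.
fn query_aggregation(aggs: &HashMap<String, i64>, name: &str) -> Option<i64> {
    aggs.get(name).copied()
}

/// Streamed read: yield every event at or after `offset`.
fn stream_from(log: &[String], offset: usize) -> impl Iterator<Item = &String> {
    log.iter().skip(offset)
}

fn main() {
    let mut aggs = HashMap::new();
    aggs.insert("orders.count".to_string(), 42);
    assert_eq!(query_aggregation(&aggs, "orders.count"), Some(42));

    let log = vec!["a".to_string(), "b".to_string(), "c".to_string()];
    // Resuming at offset 1 skips the first event.
    let tail: Vec<_> = stream_from(&log, 1).collect();
    assert_eq!(tail.len(), 2);
}
```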
Architecture
Data flow
Data enters nodata:
- Application uses the SDK to publish data
- Data is sent over gRPC with a topic, an id, and the data payload
- Data is appended to the topic
- A broadcast is sent announcing that the topic was updated, along with the new offset
- A client can consume from said topic, given a topic and id
- A queue consumes each broadcast message, assigning jobs for each consumer group to deliver the messages. This decouples the actual sending of events from the ingest: consumer groups continuously send out data in the background, ticking one second between checks, and if a run takes longer than a second, the next run simply continues from where the previous one left off
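The flow above can be sketched with an in-memory model. All names here are hypothetical stand-ins for nodata's internals: a topic log, a committed offset per consumer group, and a `tick` that a background task would invoke (roughly once per second), resuming from the last committed offset each time:

```rust
use std::collections::HashMap;

// Hypothetical in-memory model of the data flow: ingest appends to a
// topic and advances the high-water offset; each consumer-group tick
// delivers everything past the committed offset, then commits. An
// overlong run simply leaves the remainder for the next tick.

#[derive(Default)]
struct Topic {
    events: Vec<Vec<u8>>,
}

#[derive(Default)]
struct Broker {
    topics: HashMap<String, Topic>,
    // Committed offset per (consumer group, topic).
    offsets: HashMap<(String, String), usize>,
}

impl Broker {
    /// Ingest: append the event and return the new high-water offset.
    fn publish(&mut self, topic: &str, data: Vec<u8>) -> usize {
        let t = self.topics.entry(topic.to_string()).or_default();
        t.events.push(data);
        t.events.len()
    }

    /// One tick for a consumer group: deliver everything past the
    /// committed offset, then commit the new offset.
    fn tick(&mut self, group: &str, topic: &str) -> Vec<Vec<u8>> {
        let key = (group.to_string(), topic.to_string());
        let start = *self.offsets.get(&key).unwrap_or(&0);
        let events = match self.topics.get(topic) {
            Some(t) => t.events[start..].to_vec(),
            None => Vec::new(),
        };
        self.offsets.insert(key, start + events.len());
        events
    }
}

fn main() {
    let mut broker = Broker::default();
    broker.publish("orders", b"a".to_vec());
    broker.publish("orders", b"b".to_vec());
    // First tick delivers both pending events...
    assert_eq!(broker.tick("billing", "orders").len(), 2);
    // ...and the next tick continues from where the last one left off.
    assert_eq!(broker.tick("billing", "orders").len(), 0);
}
```

Because the offset is committed per consumer group, multiple groups can consume the same topic independently, each at its own pace.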