# nodata
Nodata is a simple binary that consists of four parts:
1. Data ingest
2. Data storage
3. Data aggregation
4. Data API / egress
## Data ingest
Nodata presents a simple protobuf gRPC API for ingesting either single events or batches.
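As a rough sketch of what that surface could look like (the service, message, and field names below are assumptions for illustration, not taken from nodata's actual schema):

```protobuf
syntax = "proto3";

package nodata.v1;

// Hypothetical ingest service: single-event and batch publish.
service Ingest {
  rpc Publish(PublishRequest) returns (PublishResponse);
  rpc PublishBatch(PublishBatchRequest) returns (PublishResponse);
}

message PublishRequest {
  string topic = 1; // destination topic
  string id = 2;    // event id
  bytes data = 3;   // opaque payload
}

message PublishBatchRequest {
  repeated PublishRequest events = 1;
}

message PublishResponse {}
```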
## Data storage
Nodata stores data locally using a partitioned Parquet scheme.
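A minimal sketch of what a partitioned layout could look like, assuming a Hive-style `topic/date` partitioning with offset-numbered part files (the exact layout is an assumption; nodata's real scheme may differ):

```python
from pathlib import PurePosixPath


def partition_path(root: str, topic: str, date: str, offset: int) -> PurePosixPath:
    """Build a Hive-style partitioned Parquet path.

    Layout assumption: one directory per topic, one per day,
    with part files numbered by starting offset.
    """
    return (
        PurePosixPath(root)
        / f"topic={topic}"
        / f"date={date}"
        / f"part-{offset:012d}.parquet"
    )


print(partition_path("/var/lib/nodata", "orders", "2024-05-01", 42))
# -> /var/lib/nodata/topic=orders/date=2024-05-01/part-000000000042.parquet
```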
## Data aggregation
Nodata accepts WASM routines for running aggregations over the data to be processed.
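Conceptually, an aggregation routine is a fold over a stream of events. The sketch below models that interface in plain Python rather than WASM (the function shapes and the `count_by_topic` example are illustrative assumptions, not nodata's actual ABI):

```python
from typing import Callable, Iterable

# Hypothetical shape of an aggregation routine: in the real system this
# would be a WASM export; here it is modeled as a plain step function
# that folds one event into an accumulator.
Aggregation = Callable[[dict, dict], dict]


def run_aggregation(events: Iterable[dict], step: Aggregation, state: dict) -> dict:
    """Fold every event into the running state and return the result."""
    for event in events:
        state = step(state, event)
    return state


def count_by_topic(state: dict, event: dict) -> dict:
    """Example routine: count events per topic."""
    state[event["topic"]] = state.get(event["topic"], 0) + 1
    return state


events = [{"topic": "orders"}, {"topic": "orders"}, {"topic": "users"}]
print(run_aggregation(events, count_by_topic, {}))  # {'orders': 2, 'users': 1}
```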
## Data Egress
Nodata exposes aggregations as APIs, or as events delivered to a service via streamed gRPC APIs.
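The streamed form can be modeled as a generator over the topic log: a consumer supplies its last committed offset and receives every event after it. This is a simplified stand-in for a gRPC server-streaming RPC (function and variable names are illustrative assumptions):

```python
def stream_from(log: list, offset: int):
    """Yield (offset, event) pairs starting at `offset`.

    Models the streamed egress API: in the real system this would be a
    gRPC server-streaming call rather than a Python generator.
    """
    for i in range(offset, len(log)):
        yield i, log[i]


log = ["a", "b", "c", "d"]
print(list(stream_from(log, 2)))  # [(2, 'c'), (3, 'd')]
```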
# Architecture
## Data flow
Data enters nodata as follows:
1. Application uses the SDK to publish data
2. Data is sent over gRPC with a topic, an id, and a payload
3. Data is appended to the given topic
4. A broadcast is sent that said topic was updated with a given offset
5. A client can consume from said topic, given a topic and id
6. A queue consumes each broadcast message, assigning jobs to each consumer group to delegate messages
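The steps above can be sketched in miniature: a topic keeps an append-only log, each publish broadcasts the new high-water offset, and consumer groups catch up in the background from their committed offset. Class and method names here are illustrative assumptions; in particular, the real delivery loop runs on a timer tick, and if one pass runs long the next simply continues from the committed offset.

```python
class Topic:
    def __init__(self):
        self.log = []       # append-only event log
        self.watchers = []  # consumer groups to notify on publish

    def publish(self, event):
        self.log.append(event)
        offset = len(self.log)       # high-water mark after the write
        for group in self.watchers:  # step 4: broadcast the new offset
            group.notify(offset)


class ConsumerGroup:
    def __init__(self, topic):
        self.topic = topic
        self.committed = 0  # offset already delivered downstream
        self.latest = 0     # latest offset seen via broadcast
        topic.watchers.append(self)

    def notify(self, offset):
        self.latest = max(self.latest, offset)

    def tick(self):
        """One background pass: deliver everything between the committed
        and latest offsets, then commit. A slow pass is harmless; the
        next tick continues from wherever this one committed."""
        delivered = self.topic.log[self.committed:self.latest]
        self.committed = self.latest
        return delivered


topic = Topic()
group = ConsumerGroup(topic)
topic.publish("e1")
topic.publish("e2")
print(group.tick())  # ['e1', 'e2']
topic.publish("e3")
print(group.tick())  # ['e3']
```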