Come and enjoy a full-day event, 9am-9pm, in a professional and fun atmosphere. Just clear your schedule :-)
Join the fullstack community and Boost your Microservices & Data Pipelines with Kafka Streams.
* Processing large chunks of data with Kafka by SeaLights Team
At SeaLights we handle large amounts of customer data originating from their CI. Our infrastructure is currently built on top of SQS, which suffers from many limitations, such as no message ordering, no intelligent scaling, and opaque visibility. Most importantly, we find it challenging to set and meet a Service Level Objective and Agreement, which is crucial for our customers.
We're considering Kafka Streams as a new solution that will enable us to meet our growing data processing needs and optimize our development and support velocity. Also, merging streams may allow us to simplify our processing pipeline.
* Processing performance counter measurements by CellWize Team
As part of our system we process performance counters for cellular network cells. We would like to use Kafka Streams capabilities to enrich these measurements with aggregations, KPI calculations, etc.
Our system will feed a Kafka topic with the parsing results.
A stream processor will aggregate the counters and calculate KPIs continuously.
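The continuous aggregate-then-derive-KPI flow described above can be sketched in plain Python (the real system would use the Kafka Streams DSL). The counter names ("attempts", "drops"), the drop-rate KPI, and the 60-second tumbling window are illustrative assumptions, not CellWize's actual counters:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # assumed tumbling-window size

def aggregate_counters(events):
    """Sum raw performance counters per (cell, window) bucket."""
    buckets = defaultdict(lambda: defaultdict(int))
    for event in events:
        window = event["ts"] // WINDOW_SECONDS  # tumbling window key
        for name, value in event["counters"].items():
            buckets[(event["cell"], window)][name] += value
    return buckets

def calc_kpis(buckets):
    """Derive a hypothetical KPI (drop rate) from aggregated counters."""
    return {
        key: counters["drops"] / counters["attempts"]
        for key, counters in buckets.items()
        if counters.get("attempts")
    }

# Parsed counter events as they would arrive from the Kafka topic.
events = [
    {"cell": "cell-1", "ts": 10, "counters": {"attempts": 100, "drops": 2}},
    {"cell": "cell-1", "ts": 50, "counters": {"attempts": 100, "drops": 3}},
    {"cell": "cell-1", "ts": 70, "counters": {"attempts": 50, "drops": 5}},
]
kpis = calc_kpis(aggregate_counters(events))
# First window: (2+3)/200 = 0.025; second window: 5/50 = 0.1
```

In Kafka Streams this would map to a `groupByKey` + windowed `aggregate` over the parsing-results topic, with the KPI computed in a downstream `mapValues`.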
* Build a solution that identifies colors for color-blind people by Cyren Team
We want to build a solution that identifies colors for color-blind people. The dataset will consist of color images of a fixed pixel size, with the color of each image known in advance.
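One simple baseline for this kind of labeled color data, sketched here as an assumption rather than the team's actual approach, is to average an image's pixels and map the result to the nearest named color by Euclidean distance in RGB space. The palette below is illustrative:

```python
# Illustrative palette; a real solution would use the dataset's labels.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
}

def mean_rgb(pixels):
    """Average an image given as a flat list of (r, g, b) pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def classify_color(pixels):
    """Name the palette color closest to the image's mean RGB value."""
    target = mean_rgb(pixels)
    return min(
        PALETTE,
        key=lambda name: sum((p - t) ** 2 for p, t in zip(PALETTE[name], target)),
    )

# A mostly-red 2x2 image should classify as "red".
image = [(250, 10, 5), (240, 0, 0), (255, 20, 10), (245, 5, 0)]
```

A trained classifier over the labeled images would replace the fixed palette, but the nearest-color lookup illustrates the core idea.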
* Metering collection online processing by ECI Team1
Processing of periodically collected monitoring information from deployed VNF applications and physical hosts, in order to achieve better accuracy with reduced complexity and storage. For example, with CPU information retrieved over time: instead of collecting raw data and processing it from samples saved in the DB, we can process CPU utilization over several samples within a certain timeframe and save only the analyzed value.
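The CPU example above can be sketched in plain Python: bucket raw samples into fixed timeframes and persist one analyzed value per host per bucket instead of every sample. The 5-minute timeframe and the mean as the analyzed value are illustrative assumptions:

```python
from statistics import mean

TIMEFRAME_SECONDS = 300  # assumed 5-minute aggregation timeframe

def summarize_cpu(samples):
    """Reduce (host, ts, cpu_pct) samples to one mean value per bucket."""
    buckets = {}
    for host, ts, cpu_pct in samples:
        buckets.setdefault((host, ts // TIMEFRAME_SECONDS), []).append(cpu_pct)
    # Save only the analyzed value, not the raw samples.
    return {key: mean(values) for key, values in buckets.items()}

samples = [
    ("vnf-1", 0, 40.0),
    ("vnf-1", 120, 60.0),
    ("vnf-1", 310, 90.0),
]
summary = summarize_cpu(samples)
# Two stored values instead of three raw samples:
# {("vnf-1", 0): 50.0, ("vnf-1", 1): 90.0}
```

In Kafka Streams this corresponds to a windowed aggregation over the monitoring topic, with only the window results written to storage.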
* Alarm lifecycle processing by ECI Team2
Our current alarm mechanism is based on Elasticsearch queries over saved monitoring values. Processing the stream of events during monitoring data collection, and correlating data between several different sources (different Kafka topics), will give us (almost) real-time alarms, as the data gets analyzed during collection. Say a specific value stays above a threshold for a 5-minute window: we raise the alarm immediately, instead of scheduling a query to Elasticsearch.
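The threshold-for-5-minutes alarm can be sketched per event, in plain Python, to show why the alarm fires as data arrives rather than on a scheduled query. The threshold value and window length are illustrative assumptions:

```python
THRESHOLD = 80.0       # assumed alarm threshold
WINDOW_SECONDS = 300   # breach must persist for a full 5-minute window

def alarm_times(events):
    """Return the timestamp at which each sustained breach becomes an alarm."""
    breach_start = None
    alarms = []
    for ts, value in events:  # events arrive ordered by timestamp
        if value > THRESHOLD:
            if breach_start is None:
                breach_start = ts
            if ts - breach_start >= WINDOW_SECONDS:
                alarms.append(ts)   # fire immediately, once per breach
                breach_start = None
        else:
            breach_start = None     # breach interrupted, window resets
    return alarms

events = [(0, 85.0), (60, 90.0), (200, 70.0),
          (260, 95.0), (400, 99.0), (560, 91.0)]
# The first breach is interrupted at t=200; the second starts at t=260
# and completes the 5-minute window at t=560, so one alarm fires then.
```

Correlating several sources would extend this to one such check per joined stream, which in Kafka Streams maps to windowed joins across topics feeding a single alarm evaluation.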