Pafka 0.3.0 Release: A Low-cost Solution to Peak Traffic Scenarios in Kafka

# Download the image and start the docker container
docker run -it 4pdopensource/pafka-dev bash
# start zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties > zk.log 2>&1 &
# start pafka server
bin/kafka-server-start.sh config/server.properties > pafka.log 2>&1 &
# test producer performance
bin/kafka-producer-perf-test.sh --topic test --throughput 1000000 --num-records 1000000 --record-size 1024 --producer.config config/producer.properties --producer-props bootstrap.servers=localhost:9092
# test consumer performance
bin/kafka-consumer-perf-test.sh --topic test --consumer.config config/consumer.properties --bootstrap-server localhost:9092 --messages 1000000 --show-detailed-stats --reporting-interval 1000 --timeout 100000
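As a quick sanity check on the perf-test parameters above, the total payload volume and the bandwidth needed to sustain the target throughput follow directly from the record count, record size, and records-per-second rate. A minimal standalone sketch (not part of Pafka's tooling):

```python
def workload_bytes(num_records: int, record_size: int) -> int:
    """Total payload volume a perf test writes, in bytes."""
    return num_records * record_size

def required_mb_per_sec(record_size: int, throughput: int) -> float:
    """Bandwidth (MB/s) needed to sustain `throughput` records/s of `record_size` bytes."""
    return throughput * record_size / 1e6

# The producer test above: 1,000,000 records of 1024 bytes at 1,000,000 records/s
total = workload_bytes(1_000_000, 1024)       # ~1 GB of payload in total
rate = required_mb_per_sec(1024, 1_000_000)   # ~1 GB/s sustained write rate
```

This is why the 100 Gbps network and NVMe first tier matter: a single producer at these settings already targets on the order of 1 GB/s.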
# 100 Gbps network configuration
listeners=PLAINTEXT://172.29.100.24:9092
# Thread related
num.network.threads=32
num.io.threads=16
# Tiered storage related configuration
# log file channel type; options: "file", "pmem", "tiered"
# if "file": use a normal file as vanilla Kafka does; the following configs are then not applicable
log.channel.type=tiered
# the storage types for each layer (separated by ,)
storage.tiers.types=NVME,HDD
# first-layer storage paths (separated by ,)
storage.tiers.first.paths=/nvme
# first-layer storage capacities in bytes (separated by ,); -1 means use all the space
storage.tiers.first.sizes=700000000000
# second-layer storage paths (separated by ,)
storage.tiers.second.paths=/hdd
# threshold to control when to start the migration; -1 means no migration.
storage.migrate.threshold=0.1
# migration threads
storage.migrate.threads=8
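The migration policy itself is internal to Pafka; as an illustration of what a threshold-based trigger like `storage.migrate.threshold=0.1` can mean, here is a minimal sketch. The interpretation that the threshold bounds the *free-space fraction* of the first tier is an assumption, and the function name is hypothetical:

```python
def should_migrate(first_tier_used: int, first_tier_size: int, threshold: float) -> bool:
    """Decide whether to start migrating segments from the first tier to the
    second. Assumption: migration starts once the free fraction of the first
    tier drops below `threshold`; threshold == -1 disables migration.
    """
    if threshold == -1:
        return False
    free_fraction = 1 - first_tier_used / first_tier_size
    return free_fraction < threshold

# With storage.tiers.first.sizes=700000000000 and threshold 0.1, migration
# would start once less than 70 GB of the NVMe tier remains free.
```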
batch.size=163840
# Start the producer test process and write the log to producer.log
python3 bin/bench.py --brokers 172.29.100.24:9092 --threads 16 --hosts "$TEST_NODE" --num_records 2000000000 --record_size 1024 --type producer --use_dynamic --dynamic 100000:500000:2000000 --sleept 360 --only_min_max --wait_for_all > producer.log 2>&1 &
# Start the consumer test process and write the log to consumer.log
python3 bin/bench.py --brokers 172.29.100.24:9092 --threads 16 --hosts "$TEST_NODE" --num_records 2000000000 --type consumer --wait_for_all > consumer.log 2>&1 &
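The exact semantics of bench.py's `--use_dynamic` and `--dynamic 100000:500000:2000000` flags are defined in its source; as an illustration of the peak-traffic workload shape they produce, here is a sketch that alternates a send rate between a baseline and a peak on a fixed cycle. All names and the mapping of the numbers (baseline/peak rates from `--dynamic`, cycle length from `--sleept`) are assumptions for illustration:

```python
import itertools

def dynamic_rates(baseline: int, peak: int, period_s: int, peak_s: int):
    """Yield a per-second target rate: `peak` records/s for the first
    `peak_s` seconds of every `period_s`-second cycle, `baseline` otherwise."""
    for t in itertools.count():
        yield peak if t % period_s < peak_s else baseline

# First few seconds of a 360 s cycle with a 60 s burst at 2M records/s
schedule = list(itertools.islice(dynamic_rates(100_000, 2_000_000, 360, 60), 360))
```

The point of such a workload is that the NVMe tier only needs to absorb the bursts; the HDD tier catches up during the long baseline phase via background migration.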


memark.io — Leveraging Modern Storage Architecture for System Enhancement
