Each component ships its own requirement.txt and configuration files:
- requirement.txt, config.json (edit the IPs for MinIO and MongoDB; see the sketch after the start commands), storage_config.json (edit the IPs for MinIO and MongoDB)
- requirement.txt, config.json, image4edge.json (environment config for the Docker image, e.g., TensorFlow, PyTorch, or scikit-learn)
- requirement.txt, config.json
- requirement.txt, config.json, queue2kafka.json (for training and real-time monitoring with Grafana)
- requirement.txt, qot_eval
- rabbitMQ-2-kafka (for real-time visualization): requirement.txt for all devices

Start the services:
./start_dockers_4_service.sh # cd eadran
./start_all_service.sh # cd eadran
./start_orchestrator.sh # cd eadran
./start_fedserver.sh # cd eadran
./start_fededge.sh <file_config> # cd eadran
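Before starting, config.json and storage_config.json need the MinIO and MongoDB endpoints filled in. The snippet below is only a minimal loading sketch; the field names (minio_endpoint, mongodb_host, etc.) are assumptions, not the repository's actual schema:

import json
from minio import Minio
from pymongo import MongoClient

# Field names are assumptions; edit them to match the real config.json schema.
with open("config.json") as f:
    cfg = json.load(f)

minio_client = Minio(
    cfg["minio_endpoint"],                 # e.g. "192.168.0.10:9000"
    access_key=cfg["minio_access_key"],
    secret_key=cfg["minio_secret_key"],
    secure=False,
)
mongo = MongoClient(cfg["mongodb_host"], int(cfg.get("mongodb_port", 27017)))

# Quick connectivity check against both backends.
print([b.name for b in minio_client.list_buckets()])
print(mongo.list_database_names())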
The example application lives in apps/water_leak, with the model in apps/water_leak/water_leak_model_tf.py and the data reader in apps/water_leak/data_reader.py.

The storage service accepts uploads via POST to /storage/obj (Postman is recommended for testing):

post_parser.add_argument('file', type=werkzeug.datastructures.FileStorage, required=True, location='files')
post_parser.add_argument('user', type=str, required=True, location='form')
post_parser.add_argument('key', type=str, location='form')
api.add_resource(StorageService, '/storage/obj', resource_class_kwargs=config)
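If you prefer a script over Postman, the same upload can be exercised with requests; the host, port, user, and key values below are placeholders for your deployment:

import requests

# Placeholders: adjust host, port, user, and key to your deployment.
url = "http://<storage-host>:<port>/storage/obj"
with open("apps/water_leak/water_leak_model_tf.py", "rb") as f:
    resp = requests.post(
        url,
        files={"file": f},                           # matches location='files'
        data={"user": "ttu", "key": "<access-key>"}, # matches location='form'
    )
print(resp.status_code, resp.text)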
Example object names: ttu_water_leak_model_tf_v1 (model) and ttu_water_leak_data_reader_v1 (DP, the data reader).

The EADRAN service endpoint is registered as:

api.add_resource(EADRANService, "/service/<string:op>", resource_class_args=(queue,),
                 resource_class_kwargs=config)
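The actual EADRANService implementation is not reproduced here; the sketch below only illustrates how flask_restful forwards resource_class_args and resource_class_kwargs to the resource constructor and how the <string:op> URL segment reaches the handler. The request handling and field names are invented:

from flask import Flask, request
from flask_restful import Api, Resource

class EADRANService(Resource):
    # flask_restful passes resource_class_args / resource_class_kwargs
    # (here: the message queue and the loaded config) to this constructor.
    def __init__(self, queue, **config):
        self.queue = queue
        self.config = config

    def post(self, op):
        # 'op' is bound from the <string:op> URL segment, e.g. POST /service/start.
        payload = request.get_json(force=True, silent=True) or {}
        self.queue.put({"op": op, "payload": payload})  # hand off to a worker
        return {"status": "queued", "op": op}, 202

if __name__ == "__main__":
    import queue as _queue
    app = Flask(__name__)
    Api(app).add_resource(EADRANService, "/service/<string:op>",
                          resource_class_args=(_queue.Queue(),),
                          resource_class_kwargs={"debug": True})
    app.run(port=5000)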
# Start Kafka and Spark
Edit config/kafka_conf.json, then run:
python kafka2influxdb.py # push Kafka records to InfluxDB
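kafka2influxdb.py itself is not shown in this section; the following is only a sketch of the pattern it names, assuming the kafka-python and influxdb (1.x) clients and made-up topic, host, and measurement names:

import json
from kafka import KafkaConsumer
from influxdb import InfluxDBClient

# Topic, hosts, and database are assumptions; the real values come from config/kafka_conf.json.
consumer = KafkaConsumer(
    "eadran_metrics",
    bootstrap_servers="<kafka-host>:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
influx = InfluxDBClient(host="<influxdb-host>", port=8086, database="eadran")

for record in consumer:
    # Each Kafka record becomes one InfluxDB point that Grafana can chart.
    influx.write_points([{
        "measurement": "training_metrics",
        "fields": record.value,   # assumes a flat dict of numeric metrics
    }])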
Edit config/queue2kafka.json, then run:
python rabbitmq2kafka.py # forward RabbitMQ messages to Kafka
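Again, only a sketch of the RabbitMQ-to-Kafka bridge, assuming pika and kafka-python and invented field names for config/queue2kafka.json:

import json
import pika
from kafka import KafkaProducer

with open("config/queue2kafka.json") as f:   # field names below are assumptions
    cfg = json.load(f)

producer = KafkaProducer(
    bootstrap_servers=cfg["kafka_bootstrap"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_message(ch, method, properties, body):
    # Re-publish every RabbitMQ message onto the Kafka topic Grafana reads from.
    producer.send(cfg["kafka_topic"], json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host=cfg["rabbitmq_host"]))
channel = connection.channel()
channel.basic_consume(queue=cfg["rabbitmq_queue"], on_message_callback=on_message)
channel.start_consuming()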
The cost service is submitted to Spark via eadran_cost_service/run_submit_spark.sh:
./run_submit_spark.sh
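run_submit_spark.sh presumably wraps spark-submit; what the cost-service job actually computes is not shown in this section, so the snippet below is only an illustrative Structured Streaming skeleton. The topic, bootstrap servers, and console sink are all assumptions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eadran_cost_service").getOrCreate()

# Assumed topic and servers; requires the spark-sql-kafka package at submit time.
metrics = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<kafka-host>:9092")
    .option("subscribe", "eadran_metrics")
    .load()
)

query = (
    metrics.selectExpr("CAST(value AS STRING) AS value")
    .writeStream.format("console")   # placeholder sink; the real job would aggregate costs
    .start()
)
query.awaitTermination()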