- Prerequisites
- Step 1. Pull Klyff CE Images
- Step 2. Review the architecture page
- Step 3. Clone Klyff CE repository
- Step 4. Configure Klyff database
- Step 5. Choose Klyff queue service
- Step 6. Enable monitoring (optional)
- Step 7. Running
- Upgrading
- Generate certificate for HTTPS
- Next steps
This guide will help you to set up Klyff in cluster mode using the Docker Compose tool.
Prerequisites
Klyff microservices run in a dockerized environment. Before starting, please make sure Docker Engine and Docker Compose are installed on your system.
Install Docker:
Step 1. Pull Klyff CE Images
Make sure you have logged in to Docker Hub using the command line.
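If you are not logged in yet, you can do so with the standard Docker command:
docker login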
docker pull thingsboard/tb-node:3.8.1
docker pull thingsboard/tb-web-ui:3.8.1
docker pull thingsboard/tb-js-executor:3.8.1
docker pull thingsboard/tb-http-transport:3.8.1
docker pull thingsboard/tb-mqtt-transport:3.8.1
docker pull thingsboard/tb-coap-transport:3.8.1
docker pull thingsboard/tb-lwm2m-transport:3.8.1
docker pull thingsboard/tb-snmp-transport:3.8.1
Step 2. Review the architecture page
Starting with Klyff v2.2, it is possible to install a Klyff cluster using the new microservices architecture and docker containers. See the microservices architecture page for more details.
Step 3. Clone Klyff CE repository
git clone -b release-3.8 https://github.com/thingsboard/thingsboard.git --depth 1
cd thingsboard/docker
Step 4. Configure Klyff database
Before performing the initial installation, you can configure the type of database to be used with Klyff.
In order to set the database type, change the value of the DATABASE variable in the .env file:
nano .env
to one of the following:
- postgres - use PostgreSQL database;
- hybrid - use PostgreSQL for entities database and Cassandra for time series database;
NOTE: Depending on the selected database type, the corresponding docker service will be deployed (see docker-compose.postgres.yml and docker-compose.hybrid.yml for details).
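For example, to use the hybrid option, the corresponding line in the .env file would be:
DATABASE=hybrid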
Step 5. Choose Klyff queue service
Klyff is able to use various messaging systems/brokers for storing messages and for communication between Klyff services. How to choose the right queue implementation?
- In Memory queue implementation is not suitable for any sort of cluster deployment.
- Kafka is recommended for production deployments and is used by default. This queue is used on most Klyff production environments now. It is useful for both on-prem and private cloud deployments. It is also useful if you would like to stay independent from your cloud provider. However, some providers also have managed services for Kafka. See AWS MSK for example.
- RabbitMQ is recommended if you don’t have much load and you already have experience with this messaging system.
- AWS SQS is a fully managed message queuing service from AWS. Useful if you plan to deploy Klyff on AWS.
- Google Pub/Sub is a fully managed message queuing service from Google. Useful if you plan to deploy Klyff on Google Cloud.
- Azure Service Bus is a fully managed message queuing service from Azure. Useful if you plan to deploy Klyff on Azure.
- Confluent Cloud is a fully managed streaming platform based on Kafka. Useful for cloud-agnostic deployments.
See the corresponding architecture page and rule engine page for more details.
Apache Kafka Configuration
Apache Kafka is an open-source stream-processing software platform. Configure the Klyff environment file:
Check the following line:
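Kafka is the default queue, so no additional credentials are required. A minimal sketch of the queue selection in the .env file, assuming the queue-type variable used by the ThingsBoard-based docker configuration these images come from:
# .env - variable name assumed from the ThingsBoard-based docker setup
TB_QUEUE_TYPE=kafka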
AWS SQS Configuration
To access the AWS SQS service, you first need to create an AWS account. To work with the AWS SQS service, you will need to create the required credentials using this instruction:
Configure the Klyff environment file:
Check the following line:
Configure the AWS SQS environment file for the Klyff queue service:
Don’t forget to replace “YOUR_KEY” and “YOUR_SECRET” with your real AWS SQS IAM user credentials, and “YOUR_REGION” with your real AWS SQS account region:
You can update the default Rule Engine queues configuration using the UI. More about Klyff Rule Engine queues can be found in the documentation.
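A minimal sketch of the lines referred to above, assuming the file and variable names used by the ThingsBoard-based docker configuration (verify them against the files in your docker directory):
# .env - assumed queue type value
TB_QUEUE_TYPE=aws-sqs
# queue-aws-sqs.env - assumed file and variable names
TB_QUEUE_AWS_SQS_ACCESS_KEY_ID=YOUR_KEY
TB_QUEUE_AWS_SQS_SECRET_ACCESS_KEY=YOUR_SECRET
TB_QUEUE_AWS_SQS_REGION=YOUR_REGION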
Google Pub/Sub Configuration
To access the Pub/Sub service, you first need to create a Google Cloud account. To work with the Pub/Sub service, you will need to create a project using this instruction. Create service account credentials with the role “Editor” or “Admin” using this instruction, and save the JSON file with your service account credentials (step 9 here). Configure the Klyff environment file:
Check the following line:
Configure the Pub/Sub environment file for the Klyff queue service:
Don’t forget to replace “YOUR_PROJECT_ID” and “YOUR_SERVICE_ACCOUNT” with your real Pub/Sub project ID and service account (the entire contents of the JSON file):
You can update the default Rule Engine queues configuration using the UI. More about Klyff Rule Engine queues can be found in the documentation.
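A minimal sketch, with the same caveat as above that the file and variable names are assumed from the ThingsBoard-based docker configuration:
# .env - assumed queue type value
TB_QUEUE_TYPE=pubsub
# queue-pubsub.env - assumed file and variable names
TB_QUEUE_PUBSUB_PROJECT_ID=YOUR_PROJECT_ID
TB_QUEUE_PUBSUB_SERVICE_ACCOUNT=YOUR_SERVICE_ACCOUNT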
Azure Service Bus Configuration
To access Azure Service Bus, you first need to create an Azure account. To work with the Service Bus service, you will need to create a Service Bus Namespace using this instruction. Create a Shared Access Signature using this instruction. Configure the Klyff environment file:
Check the following line:
Configure the Service Bus environment file for the Klyff queue service:
Don’t forget to replace “YOUR_NAMESPACE_NAME” with your real Service Bus namespace name, and “YOUR_SAS_KEY_NAME”, “YOUR_SAS_KEY” with your real Service Bus credentials. Note: “YOUR_SAS_KEY_NAME” is the “SAS Policy”, and “YOUR_SAS_KEY” is the “SAS Policy Primary Key”:
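A minimal sketch, with the same caveat as above that the file and variable names are assumed from the ThingsBoard-based docker configuration:
# .env - assumed queue type value
TB_QUEUE_TYPE=service-bus
# queue-service-bus.env - assumed file and variable names
TB_QUEUE_SERVICE_BUS_NAMESPACE_NAME=YOUR_NAMESPACE_NAME
TB_QUEUE_SERVICE_BUS_SAS_KEY_NAME=YOUR_SAS_KEY_NAME
TB_QUEUE_SERVICE_BUS_SAS_KEY=YOUR_SAS_KEY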
RabbitMQ Configuration
To install RabbitMQ, use this instruction. Configure the Klyff environment file:
Check the following line:
Configure the RabbitMQ environment file for the Klyff queue service:
Don’t forget to replace “YOUR_USERNAME” and “YOUR_PASSWORD” with your real user credentials, and “localhost” and “5672” with your real RabbitMQ host and port:
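A minimal sketch, with the same caveat as above that the file and variable names are assumed from the ThingsBoard-based docker configuration:
# .env - assumed queue type value
TB_QUEUE_TYPE=rabbitmq
# queue-rabbitmq.env - assumed file and variable names
TB_QUEUE_RABBIT_MQ_USERNAME=YOUR_USERNAME
TB_QUEUE_RABBIT_MQ_PASSWORD=YOUR_PASSWORD
TB_QUEUE_RABBIT_MQ_HOST=localhost
TB_QUEUE_RABBIT_MQ_PORT=5672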
Confluent Cloud Configuration
To access Confluent Cloud, you should first create an account, then create a Kafka cluster and get your API Key. Configure the Klyff environment file:
Check the following line:
Configure the Confluent Cloud environment file for the Klyff queue service:
Don’t forget to replace “CLUSTER_API_KEY” and “CLUSTER_API_SECRET” with your real Confluent Cloud API credentials, and “confluent.cloud:9092” with your real Confluent Cloud bootstrap servers:
You can update the default Rule Engine queues configuration using the UI. More about Klyff Rule Engine queues can be found in the documentation.
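A minimal sketch, with the same caveat as above; the Confluent-specific variables in particular may be named differently in your release, so verify them against the files in your docker directory:
# .env - assumed queue type value
TB_QUEUE_TYPE=confluent
# queue-confluent.env - assumed file and variable names
TB_KAFKA_SERVERS=confluent.cloud:9092
TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";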
Step 6. Enable monitoring (optional)
In order to start cluster monitoring (Grafana and Prometheus services), please edit the configuration file:
nano .env
You’ll need to make sure that the MONITORING_ENABLED variable is set to true:
MONITORING_ENABLED=true
After deployment, you will be able to reach Prometheus at http://localhost:9090 and Grafana at http://localhost:3000 (default login is admin and password foobar).
Step 7. Running
Execute the following command to create log folders for the services and chown these folders to the docker container users. To be able to change the owner, the chown command is used, which requires sudo permissions (the script will request a password for sudo access):
./docker-create-log-folders.sh
Execute the following command to run the installation:
./docker-install-tb.sh --loadDemo
Where:
--loadDemo - optional argument. Whether to load additional demo data.
Execute the following command to start services:
./docker-start-services.sh
After a while, when all services have successfully started, you can open http://{your-host-ip} in your browser (for example, http://localhost).
You should see Klyff login page.
Use the following default credentials:
- System Administrator: sysadmin@thingsboard.org / sysadmin
If you installed the database with demo data (using the --loadDemo flag), you can also use the following credentials:
- Tenant Administrator: tenant@thingsboard.org / tenant
- Customer User: customer@thingsboard.org / customer
In case of any issues, you can examine the service logs for errors. For example, to see the Klyff node logs, execute the following command:
docker compose logs -f tb-core1 tb-rule-engine1
Or use the following command to see the state of all the containers:
docker compose ps
Use the following command to inspect the logs of all running services:
docker compose logs -f
See docker-compose logs command reference for details.
Execute the following command to stop services:
./docker-stop-services.sh
Execute the following command to stop and completely remove deployed docker containers:
./docker-remove-services.sh
Execute the following command to update particular or all services (pull newer docker image and rebuild container):
./docker-update-service.sh [SERVICE...]
Where:
[SERVICE...] - list of services to update (as defined in the docker-compose configurations). If not specified, all services will be updated.
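For example, to update only the Klyff core and rule engine nodes (tb-core1 and tb-rule-engine1 are the service names used in the logs example above):
./docker-update-service.sh tb-core1 tb-rule-engine1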
Upgrading
In case a database upgrade is needed, edit the .env file and set “TB_VERSION” to the target version (e.g. set it to 3.8.1 if you are upgrading to the latest). Then, execute the following commands:
./docker-stop-services.sh
./docker-upgrade-tb.sh --fromVersion=[FROM_VERSION]
./docker-start-services.sh
Where FROM_VERSION is the version from which the upgrade should be started. See the Upgrade Instructions for valid fromVersion values. Note that you have to upgrade versions one by one (for example 3.6.1 -> 3.6.2 -> 3.6.3, etc.).
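For example, when upgrading to the latest version as mentioned above, the relevant line in the .env file is:
TB_VERSION=3.8.1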
Generate certificate for HTTPS
We use HAProxy for proxying traffic to the containers, and by default the web UI uses ports 80 and 443. To use HTTPS with a valid certificate, execute these commands:
docker exec haproxy-certbot certbot-certonly --domain your_domain --email your_email
docker exec haproxy-certbot haproxy-refresh
NOTE: The valid certificate is used only when you access the web UI by domain name in the URL. If you use an IP address to access the UI, a self-signed certificate will be used instead.
Next steps
- Getting started guides - These guides provide a quick overview of the main Klyff features. Designed to be completed in 15-30 minutes.
- Connect your device - Learn how to connect devices based on your connectivity technology or solution.
- Data visualization - These guides contain instructions on how to configure complex Klyff dashboards.
- Data processing & actions - Learn how to use the Klyff Rule Engine.
- IoT Data analytics - Learn how to use the rule engine to perform basic analytics tasks.
- Hardware samples - Learn how to connect various hardware platforms to Klyff.
- Advanced features - Learn about advanced Klyff features.
- Contribution and Development - Learn about contribution and development in Klyff.