Installation

The Cube Builder depends essentially on BDC-Catalog. Make sure to use compatible versions:

Compatibility

Cube-Builder    BDC-Catalog
------------    -----------
1.0.1           1.0.2
1.0.0           1.0.1
0.8.x           0.8.2
0.4.x, 0.6.x    0.8.1
0.2.x           0.2.x

Development Installation

Clone the software repository:

$ git clone https://github.com/brazil-data-cube/cube-builder.git

Go to the source code folder:

$ cd cube-builder

Install in development mode:

$ pip3 install -U pip "setuptools<67" wheel
$ pip3 install -e .[all]
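After the installation finishes, a quick optional sanity check can confirm the package imports. This is a guarded sketch: it reports a status instead of failing when the editable install has not completed.

```shell
# Guarded import check: reports status instead of erroring out when the
# editable install has not completed yet.
if python3 -c "import cube_builder" >/dev/null 2>&1; then
    status="installed"
else
    status="missing"
fi
echo "cube_builder: ${status}"
```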

Note

If you have problems with the librabbitmq installation, please see [1].

Note

Setuptools v67+ introduced breaking changes related to Pip version requirements. For now, you should install setuptools<67 for compatibility. The packages in Cube-Builder will be upgraded to support the latest version.

Running in Development Mode

Launch the RabbitMQ Container

You will need an instance of RabbitMQ up and running in order to launch the cube-builder celery workers.

The Cube Builder repository provides a docker-compose configuration for a RabbitMQ container. Please follow the steps below:

$ docker-compose up -d mq

After that command, check which port was bound from the host to the container:

$ docker container ls

CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                    NAMES
a3bb86d2df56   rabbitmq:3-management  "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes   4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp   cube-builder-rabbitmq

Note

In the above output the RabbitMQ service is attached to the ports 5672 for socket client and 15672 for the RabbitMQ User Interface. You can check http://127.0.0.1:15672. The default credentials are guest and guest for user and password respectively.
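To confirm the broker is actually reachable, a small guarded check against the RabbitMQ management HTTP API can help. The URL and guest/guest credentials match the defaults above; the availability of curl is an assumption, and the snippet degrades gracefully when the container is down.

```shell
# Probe the RabbitMQ management API with the default guest/guest
# credentials; falls through cleanly if the container is not running
# or curl is unavailable.
MQ_URL="http://127.0.0.1:15672/api/overview"
if curl -fsS -u guest:guest "$MQ_URL" >/dev/null 2>&1; then
    mq_status="up"
else
    mq_status="unreachable"
fi
echo "RabbitMQ at $MQ_URL: $mq_status"
```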

Prepare the Database System

The Cube Builder uses BDC-DB for its database definitions to store data cube metadata.

Note

If you already have a database instance with the Brazil Data Cube data model, you can skip this section.

In order to proceed with the installation, you will need PostgreSQL with PostGIS. We have already prepared a minimal instance in docker-compose.yml. You may use it as follows:

$ docker-compose up -d postgres

We have prepared a script to configure the database model:

$ SQLALCHEMY_DATABASE_URI="postgresql://postgres:postgres@localhost/bdc" ./deploy/configure-db.sh
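A guarded connectivity check can confirm that the database answers and that the PostGIS extension is installed. This is a sketch: it assumes psql is on your PATH and the postgres container from the previous step is running, and it skips cleanly otherwise.

```shell
# Verify the bdc database answers and PostGIS is installed; skips
# cleanly when psql is not available or the database is down.
DB_URI="postgresql://postgres:postgres@localhost/bdc"
if command -v psql >/dev/null 2>&1; then
    psql "$DB_URI" -c "SELECT PostGIS_Version();" || echo "database not reachable"
else
    echo "psql not found; skipping check"
fi
```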

Launch the Cube Builder service

In the source code folder, enter the following command:

FLASK_ENV="development" \
WORK_DIR="/workdir" \
DATA_DIR="/data" \
SQLALCHEMY_DATABASE_URI="postgresql://postgres:postgres@localhost/bdc" \
cube-builder run

You may need to replace the definition of some environment variables:

  • FLASK_ENV="development": tells Flask to run in debug mode.

  • WORK_DIR="/workdir": the path used to store temporary files during cube processing.

  • DATA_DIR="/data": the path used to store the generated data cubes.

  • SQLALCHEMY_DATABASE_URI="postgresql://postgres:postgres@localhost/bdc": the database connection URI for PostgreSQL.
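Instead of prefixing every command, the same variables can be kept in a local env file and sourced before launching the service. The file name here is illustrative; the values mirror the example command above.

```shell
# Write the environment to a local file and source it before running
# the service; values mirror the example command above.
cat > cube-builder.env <<'EOF'
export FLASK_ENV="development"
export WORK_DIR="/workdir"
export DATA_DIR="/data"
export SQLALCHEMY_DATABASE_URI="postgresql://postgres:postgres@localhost/bdc"
EOF
. ./cube-builder.env
echo "WORK_DIR=$WORK_DIR DATA_DIR=$DATA_DIR"
```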

The above command should output some messages in the console as shown below:

* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 319-592-254

Launch the Cube Builder worker

Enter the following command to start a Cube Builder worker:

WORK_DIR="/workdir" \
DATA_DIR="/data" \
SQLALCHEMY_DATABASE_URI="postgresql://postgres:postgres@localhost/bdc" \
celery -A cube_builder.celery.worker:celery worker -l INFO --concurrency 8 -Q default,merge-cube,prepare-cube,blend-cube,publish-cube

You may need to replace the definition of some parameters:

  • -l INFO: defines the logging level. You may choose between DEBUG, INFO, WARNING, ERROR, CRITICAL, or FATAL.

  • --concurrency 8: defines the number of concurrent processes used to generate the data cube. The default is the number of CPUs available on your system.

  • -Q default,merge-cube,prepare-cube,blend-cube,publish-cube: the list of queues consumed by Cube-Builder in order to execute the generation tasks. You can set up several workers listening to specific queues and limit how many processes each runs in parallel.

Note

The command line cube-builder worker is an auxiliary tool that wraps the celery command line using cube_builder as context. In this way, all celery worker parameters are currently supported; see the Celery Workers Guide for more details. If you keep the parameters WORK_DIR and DATA_DIR, make sure the directories are writable; otherwise you may see Permission Denied errors.
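A small check can verify both directories exist and are writable before starting the worker. The paths are the examples used throughout this guide; adjust them to your setup.

```shell
# Report whether a working directory is present and writable; the
# worker will hit Permission Denied errors otherwise.
check_dir() {
    if [ -d "$1" ] && [ -w "$1" ]; then
        echo "writable"
    else
        echo "missing or not writable"
    fi
}

for d in /workdir /data; do
    echo "$d: $(check_dir "$d")"
done
```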

Warning

The Cube Builder can use a lot of memory for each concurrent process, since it opens multiple images in memory. You can limit the number of concurrent processes with --concurrency NUMBER to prevent this.

Footnotes