Scan workflow

Frontend API Part (frontend_api/uwsgi+hug)

  1. A new scan object is created in the PostgreSQL database.
  2. Files are uploaded to the web API, stored on the filesystem, and registered in the PostgreSQL database.
  3. The scan is launched: an asynchronous task is queued on the Frontend Celery.
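The three steps above can be sketched as a minimal state model. The class, method names, and status values below are illustrative, not IRMA's actual schema:

```python
import uuid

class Scan:
    """Illustrative stand-in for the scan row stored in PostgreSQL."""
    def __init__(self):
        self.id = str(uuid.uuid4())   # step 1: new scan object
        self.files = []               # files registered for this scan
        self.status = "created"

    def register_file(self, path, content):
        """Step 2: store the upload and register it with the scan."""
        file_id = str(uuid.uuid4())
        # In IRMA the bytes go to the filesystem and the metadata to
        # PostgreSQL; here both are kept in memory for illustration.
        self.files.append({"id": file_id, "path": path, "content": content})
        return file_id

    def launch(self):
        """Step 3: mark the scan launched; the real API queues a Celery task."""
        if not self.files:
            raise ValueError("cannot launch a scan with no files")
        self.status = "launched"

scan = Scan()
scan.register_file("eicar.txt", b"X5O!...")
scan.launch()
```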

Frontend Celery Part (frontend_app/celery)

  1. The probes to use are filtered according to the scan options (selected probes, MIME type filtering).
  2. Empty results are created in the PostgreSQL database (one per probe per file).
  3. Each file is uploaded to the SFTP server.
  4. For each uploaded file, a scan task is launched on the Brain with the file's probe list (depending on the force option, some results may already be present).
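Step 1's probe filtering can be sketched as follows. The probe metadata and MIME type rules here are hypothetical; IRMA's real filter lives in the Frontend's Celery tasks:

```python
def filter_probes(available_probes, selected, mimetype):
    """Keep only probes that were selected for the scan and that accept
    the file's MIME type (None means the probe accepts anything)."""
    result = []
    for name, accepted_mimetypes in available_probes.items():
        if selected and name not in selected:
            continue  # probe not requested in the scan options
        if accepted_mimetypes and mimetype not in accepted_mimetypes:
            continue  # probe does not handle this MIME type
        result.append(name)
    return result

# Hypothetical probe catalog: ClamAV scans anything, PEiD only PE files.
probes = {
    "ClamAV": None,
    "PEiD": {"application/x-dosexec"},
}
filter_probes(probes, selected=None, mimetype="text/plain")
```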

Brain Celery Part (scan_app/celery)

  1. A new scan object is created in the SQLite database to track jobs (so scans can be canceled).
  2. Each file is sent for analysis to every selected probe. Each time a probe becomes available in IRMA, it registers itself with the Brain and opens a RabbitMQ queue named after the probe; the probe list is retrieved by listing the active queues.
  3. Two callbacks are set on every probe scan task: one for success and one for failure.

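In Celery, the two callbacks of step 3 correspond to the `link` and `link_error` options of `apply_async`. A broker-free sketch of the dispatch pattern (the job naming and bookkeeping are illustrative):

```python
def launch_probe_job(probe_scan, on_success, on_error, job):
    """Run one probe job and route the outcome to exactly one callback,
    mirroring Celery's `link` / `link_error` task options."""
    try:
        result = probe_scan(job)
    except Exception as exc:
        on_error(job, exc)        # failure callback
    else:
        on_success(job, result)   # success callback

completed = []  # stands in for the Brain's job-tracking store

def on_success(job, result):
    completed.append((job, "ok", result))

def on_error(job, exc):
    completed.append((job, "error", str(exc)))

launch_probe_job(lambda job: "clean", on_success, on_error, job="file-1/ClamAV")
```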
Probe Celery Part (probe_app/celery)

  1. A scan task is received with a file id.
  2. The file is downloaded as a temporary file.
  3. The file is scanned by the probe.
  4. The results are sent back to the Brain through one of the two callbacks.
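Steps 2 and 3 can be sketched with the standard library. The `download` and `scan` functions are stand-ins: a real probe fetches the file over SFTP and runs its scanner binary:

```python
import os
import tempfile

def run_probe(file_id, download, scan):
    """Download the file to a temporary path, scan it, clean up, and
    return the result that will be sent back to the Brain."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(download(file_id))   # step 2: temporary local copy
        return scan(path)                # step 3: probe-specific analysis
    finally:
        os.unlink(path)                  # the temp file never outlives the task

# Hypothetical in-memory file store and trivial "probe" for illustration.
fake_store = {"file-1": b"MZ\x90\x00"}
result = run_probe(
    "file-1",
    download=fake_store.__getitem__,
    scan=lambda path: {"status": "clean", "size": os.path.getsize(path)},
)
```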

Brain Celery Part (result_app/celery)

  1. Successful results are marked as completed in the SQLite database.
  2. Successful results are forwarded to the Frontend.
  3. Failed jobs are likewise marked as completed in the SQLite database.
  4. As there is no result, an error message is generated to tell the Frontend that the job for that particular file and probe failed.
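The four cases above can be sketched as two callbacks over a job table. The table and the `forward_to_frontend` hook are illustrative, not IRMA's actual API:

```python
jobs = {}        # job id -> status, standing in for the Brain's SQLite table
forwarded = []   # messages sent on to the Frontend

def forward_to_frontend(message):
    forwarded.append(message)

def on_probe_success(job_id, result):
    jobs[job_id] = "completed"            # case 1: mark the job done
    forward_to_frontend({"job": job_id, "result": result})   # case 2

def on_probe_error(job_id, error):
    jobs[job_id] = "completed"            # case 3: failures also close the job
    # Case 4: no probe result exists, so synthesize an error message
    # telling the Frontend this file/probe job failed.
    forward_to_frontend({"job": job_id, "error": f"probe failed: {error}"})

on_probe_success("file-1/ClamAV", {"status": "clean"})
on_probe_error("file-1/PEiD", "timeout")
```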

Frontend Celery Part (frontend_app/celery)

  1. A result is received for each file and probe.
  2. Results are updated in the PostgreSQL database.
  3. If the scan is finished, a scan flush task is launched on the Brain to delete the files from the SFTP server.
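The final stage can be sketched as an aggregator that fills the empty result slots and triggers the flush once every (file, probe) pair has reported. The names are illustrative; the real flush is a Celery task sent to the Brain:

```python
class ScanResults:
    """Tracks one result slot per (file, probe) pair, as created empty
    by the Frontend before the scan is dispatched."""
    def __init__(self, expected_pairs):
        self.results = {pair: None for pair in expected_pairs}
        self.flushed = False

    def update(self, pair, result):
        self.results[pair] = result          # step 2: store the result
        if all(v is not None for v in self.results.values()):
            self.flush()                     # step 3: the scan is finished

    def flush(self):
        # Stand-in for launching the Brain task that deletes SFTP files.
        self.flushed = True

scan = ScanResults([("f1", "ClamAV"), ("f1", "PEiD")])
scan.update(("f1", "ClamAV"), {"status": "clean"})
scan.update(("f1", "PEiD"), {"status": "infected"})
```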