Benchmarking solutions within MARVEL


Benchmarking can be defined as a way to measure and compare performance metrics or business processes against the current state of the art. The MARVEL framework aims to go beyond the state of the art in several aspects of handling, analysing and managing heterogeneous data, especially of an audio-visual (AV) nature. Therefore, benchmarking approaches are needed to verify and demonstrate the improvements achieved by MARVEL.

MARVEL aims to assess its results from different perspectives: societal, business, technical, academic, and usability. Focusing on the technical perspective, we distinguish between assessing the performance metrics of the individual components of the MARVEL architecture and those of the different pipelines composed of several components. The former is more straightforward, as most component owners know the current state of the art of their own technologies, the existing benchmarks or performance studies, and the metrics and KPIs they aim to achieve. The latter is harder, as the goal is to assess a combination of components working together to realise specific usage scenarios, which relate not only to technical performance but also to the business goals of the pilots. In this case, the use of benchmarks should build on existing efforts in benchmarking communities, although finding off-the-shelf solutions will in all probability be impossible in most cases.
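
To make this distinction concrete, here is a minimal sketch in Python; the component names and the latency metric are illustrative placeholders, not actual MARVEL components. It contrasts a micro-benchmark of a single component with an application-level measurement of a pipeline of components:

```python
import time
import statistics

# Hypothetical stand-ins for individual components; real MARVEL
# components would wrap actual audio-visual models or services.
def audio_feature_extractor(samples):
    return [x * 0.5 for x in samples]

def event_classifier(features):
    return "anomaly" if sum(features) > 10 else "normal"

def micro_benchmark(component, payload, runs=100):
    """Time one component in isolation (micro-benchmark)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        component(payload)
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)

def application_benchmark(pipeline, payload, runs=100):
    """Time an end-to-end chain of components (application benchmark)."""
    timings = []
    for _ in range(runs):
        data = payload
        start = time.perf_counter()
        for component in pipeline:
            data = component(data)
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)

samples = list(range(50))
print("component latency:", micro_benchmark(audio_feature_extractor, samples))
print("pipeline latency:", application_benchmark(
    [audio_feature_extractor, event_classifier], samples))
```

Note that an application benchmark is more than the sum of its micro-benchmarks: it also captures the cost of handing data from one component to the next, which is exactly what component-level figures miss.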

Our benchmarking methodology

To kick off the benchmarking process, ATOS proposed a methodology and an initial benchmarking strategy in deliverable D1.2. The methodology starts by collecting business objectives and KPIs and by identifying the existing processes of the MARVEL architecture in relation to the different pilot scenarios. This work has led to mapping the scenarios to generic pipelines and architectural blueprints, enabling comparison with similar initiatives. The mapping to pipelines and blueprints has been done using the DataBench project framework, with the clear benefit of creating an alignment with existing initiatives such as the Big Data Value Reference Model, NIST, and other projects from the BDV PPP portfolio. This alignment has been reported extensively in deliverable D1.3 and is invaluable for understanding the main MARVEL processes that should be benchmarked or assessed.
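
As a minimal illustration of such a mapping, the sketch below uses a hypothetical scenario name and simplified pipeline steps (the real mapping uses the DataBench pipelines and blueprints reported in D1.3) to record, per pilot scenario, the generic pipeline it instantiates and the KPIs against which it will be assessed:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioMapping:
    """Links a pilot scenario to a generic pipeline and its target KPIs."""
    scenario: str
    pipeline: list                             # ordered generic processing steps
    kpis: dict = field(default_factory=dict)   # KPI name -> target value

mappings = [
    ScenarioMapping(
        scenario="Crowd monitoring (illustrative)",
        pipeline=["data acquisition", "anonymisation",
                  "AV inference", "decision support"],
        kpis={"end-to-end latency (s)": 2.0, "detection F1": 0.85},
    ),
]

for m in mappings:
    print(m.scenario, "->", " -> ".join(m.pipeline), "| KPIs:", m.kpis)
```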

Once the individual components of the architecture, the pipelines, the blueprints and the KPIs are clear, the methodology suggests looking closely at existing, relevant benchmarking or performance assessment approaches for micro- and application-level benchmarking. This survey is expected to identify suitable benchmarks, but also potential gaps, both for specific technologies (micro-benchmarks) and, especially, for benchmarking pipelines of many components (application benchmarks).
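
Where harnesses do exist, they can cover part of the micro-benchmarking need. As one hedged example (the component under test is a made-up placeholder), the pytest-benchmark plugin can time an individual function with repetition and basic statistics built in:

```python
# test_component_bench.py -- run with: pytest test_component_bench.py
# Requires the pytest-benchmark plugin (pip install pytest-benchmark).

def normalise(samples):
    """Hypothetical stand-in for a real audio pre-processing component."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

def test_normalise_speed(benchmark):
    # The `benchmark` fixture calls the function many times and
    # reports min/mean/stddev timings in the pytest output.
    result = benchmark(normalise, [0.1, -0.4, 0.9, -0.2] * 1000)
    assert len(result) == 4000
```

The gap sits mostly on the application side: a comparable off-the-shelf harness for a full AV pipeline rarely exists, which is why the survey pays special attention to application benchmarks.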

These are just the first steps of the benchmarking work in MARVEL. As a next step, a workshop on benchmarking MARVEL solutions will be held around the end of 2021 to bootstrap this process.

What are the benchmarks you apply? What do you know about benchmarking complex solutions? Any ideas you would like to share?

Feel free to reach out using the MARVEL contact form, or find us on Twitter and LinkedIn.

Signed by: Tomás Pariente Lobo from the ATOS team



Funding


This project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under grant agreement No 957337. The website reflects only the view of the author(s), and the Commission is not responsible for any use that may be made of the information it contains.