BioASQ - Task Synergy

Task Synergy version 2 begins on Monday, May 10, 2021!

What's new in Task Synergy compared to Task b?

Other notes on Task Synergy



BioASQ Synergy Guidelines

In BioASQ Synergy, a number of questions will be posed by our group of experts; answering them using the designated version of the CORD-19 Dataset will constitute the first round of the task.

A selection of your results will be provided as gold standard (reference) items before the second round of questions, and can be used as training data for the second or any subsequent round. These results will be annotated by the group of experts, without provenance indicating which system(s) submitted them.
This process, of providing annotated results (feedback) along with the persisting and/or new questions, will be applied before each round.

Please note that, either before the first round or at any subsequent point, systems can use the CORD-19 Dataset for training purposes, whether through the metadata CSVs we will be providing or in any other format in which the dataset is available at the Allen AI Institute website.
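As a starting point for working with the metadata CSVs, the following is a minimal sketch of parsing CORD-19-style metadata. The column names (`cord_uid`, `title`, `abstract`) follow the published CORD-19 metadata schema, and the sample rows are fabricated placeholders for illustration only; always check the designated CSV you are given.

```python
import csv
import io

# Fabricated sample in the shape of the CORD-19 metadata.csv
# (real files have many more columns and rows).
SAMPLE_METADATA = """cord_uid,title,abstract
ug7v899j,Clinical features of culture-proven pneumonia,OBJECTIVE: This retrospective study examined...
02tnwd4m,Nitric oxide: a pro-inflammatory mediator?,Inflammatory diseases of the respiratory tract...
"""

def load_metadata(csv_file):
    """Yield one dict per article, keeping only rows with an abstract,
    since Synergy considers titles and abstracts only."""
    for row in csv.DictReader(csv_file):
        if row.get("abstract"):
            yield {
                "cord_uid": row["cord_uid"],
                "title": row["title"],
                "abstract": row["abstract"],
            }

articles = list(load_metadata(io.StringIO(SAMPLE_METADATA)))
print(len(articles))  # 2
```

In practice you would pass an open file handle for the designated round's `metadata.csv` instead of the in-memory sample.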

In each round, BioASQ Synergy will provide test questions, in English, along with gold standard (reference) items for the questions of the previous round (from round 2 onwards), if any. The test questions are being constructed by a team of biomedical experts from around Europe.

Unlike classic BioASQ challenges, BioASQ Synergy has no phases, only rounds. In each round we will release test questions, and systems will respond with relevant articles (in English, from the designated metadata CSV of the CORD-19 Dataset) and relevant snippets drawn only from the titles or abstracts of those articles. Please note that, in this version of the task, the full text of the articles, even if available, is not considered.
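The title/abstract restriction above can be enforced when building a snippet entry for a retrieved article. This is a hedged sketch: the field names (`document`, `beginSection`, `offsetInBeginSection`, etc.) mirror the snippet schema used in earlier BioASQ tasks, but the authoritative format for Synergy is given in the "JSON format of the datasets" section.

```python
def make_snippet(doc_id, section, section_text, start, end):
    """Build one snippet entry from the title or abstract of an article.
    Field names are assumed from the usual BioASQ snippet schema."""
    # Only the title and abstract are considered in this version of the task.
    assert section in ("title", "abstract")
    return {
        "document": doc_id,
        "beginSection": section,
        "endSection": section,
        "offsetInBeginSection": start,
        "offsetInEndSection": end,
        "text": section_text[start:end],
    }

snippet = make_snippet(
    "ug7v899j", "abstract",
    "Nitric oxide is a pro-inflammatory mediator.", 0, 12)
print(snippet["text"])  # Nitric oxide
```

Keeping the character offsets alongside the snippet text lets the evaluation tie each snippet back to its exact span in the source section.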

Questions may persist from round to round either intact (if the biomedical experts are unsatisfied with the previous responses) or modified/versioned (if the biomedical experts are further informed by previous results). Some of the questions may come from the TREC-COVID dataset, which will be offered to the experts as a starting point. Two experts may pose the same question in a round, so some questions may appear in the test sets twice (with different ids). This captures the case where two experts have different feedback for the same response submitted by the participating systems for the same question.

If a question is designated as "ready to answer" then systems will respond with exact answers (e.g., named entities in the case of factoid questions) and ideal answers (paragraph-sized summaries), both in English.
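The step above can be sketched as follows. The field names (`exact_answer`, `ideal_answer`) follow the convention of earlier BioASQ tasks and should be treated as assumptions until checked against the "JSON format of the datasets" section; the example question is fabricated for illustration.

```python
def answer_question(question, exact_answer, ideal_answer):
    """Attach answers to a question flagged as "ready to answer".
    Field names are assumed from the usual BioASQ convention."""
    answered = dict(question)
    if question["type"] == "factoid":
        # For factoid questions the exact answer is a named entity
        # (lists allow synonyms of the same entity).
        answered["exact_answer"] = [exact_answer]
    # The ideal answer is a paragraph-sized summary, in English.
    answered["ideal_answer"] = ideal_answer
    return answered

q = {"id": "example-1", "type": "factoid",
     "body": "Which receptor does SARS-CoV-2 use for cell entry?"}
a = answer_question(q, "ACE2",
                    "SARS-CoV-2 enters human cells via the ACE2 receptor.")
print(a["exact_answer"])  # ['ACE2']
```

Questions not yet designated "ready to answer" would be submitted with documents and snippets only, leaving these answer fields out.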

There will be a total of four rounds. Systems may participate in any or, ideally, all rounds.

The rest of the guidelines are organized in sections. You can expand a section by clicking on it.

+ Types of questions

+ Required Answers in Task Synergy

+ Test dataset and evaluation process

+ Designated resources for Synergy

+ JSON format of the datasets

+ Systems

An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition: George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artières, Axel Ngonga, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos and Georgios Paliouras, in BMC Bioinformatics, 2015.