Test Results for the BioASQ MultiCardioNER Task
The evaluation measures for each submitted system are listed below: precision (P), recall (R), and F1 score. Within each track, systems are ranked by descending F1.
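For reference, here is a minimal sketch of how entity-level precision, recall, and F1 relate. This is an illustration, not the official MultiCardioNER scorer; the final lines verify the harmonic-mean identity against the top Track 1 row below.

```python
# Minimal sketch of entity-level NER metrics (assumption: the official
# MultiCardioNER scorer may differ in implementation details).

def precision(tp: int, fp: int) -> float:
    # Fraction of predicted entity mentions that are correct.
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    # Fraction of gold-standard entity mentions that were found.
    return tp / (tp + fn) if tp + fn else 0.0

def f1(p: float, r: float) -> float:
    # F1 is the harmonic mean of precision and recall.
    return 2 * p * r / (p + r) if p + r else 0.0

# Sanity check against the top Track 1 row (BIT.UA, run1-all-full):
p, r = 0.8155, 0.8243
assert round(f1(p, r), 4) == 0.8199  # matches the reported F1
```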
Track 1

| Team Name | Run name | P | R | F1 |
| --- | --- | --- | --- | --- |
| BIT.UA | run1-all-full | 0.8155 | 0.8243 | 0.8199 |
| BIT.UA | run0-top5-full | 0.8110 | 0.8181 | 0.8145 |
| Enigma | 3-system-CLIN-X-ES-pretrained | 0.8016 | 0.8082 | 0.8049 |
| Enigma | 2-system-CLIN-X-ES-14 | 0.8052 | 0.8007 | 0.8030 |
| PICUSLab | aug_fus_sub2 | 0.7794 | 0.8030 | 0.7910 |
| BIT.UA | run4-all | 0.7981 | 0.7827 | 0.7903 |
| Enigma | 1-system-CLIN-X-ES-12 | 0.7827 | 0.7938 | 0.7882 |
| PICUSLab | aug_fus_sub1 | 0.7346 | 0.7799 | 0.7566 |
| BIT.UA | run3-all-val | 0.7544 | 0.7588 | 0.7566 |
| BIT.UA | run2-best-val | 0.7480 | 0.7542 | 0.7511 |
| DataScienceTUW | run4-roberta-dg | 0.6565 | 0.7376 | 0.6947 |
| DataScienceTUW | run5-roberta-dg-windows | 0.6546 | 0.7244 | 0.6877 |
| Siemens | run1_SDR | 0.6758 | 0.6437 | 0.6593 |
| PICUSLab | aug_fus_sm_sub2 | 0.8919 | 0.4897 | 0.6323 |
| DataScienceTUW | run1_mdeberta-ct-mlm-dg | 0.5928 | 0.6715 | 0.6297 |
| PICUSLab | aug_fus_sm_sub1 | 0.8886 | 0.4744 | 0.6185 |
| DataScienceTUW | run2-mdeberta-ct | 0.5027 | 0.6884 | 0.5810 |
| DataScienceTUW | run3_mdeberta-ct-dg | 0.4800 | 0.6773 | 0.5618 |
| NOVALINCS | 1_bsc-bio-ehr-es_distemist_4 | 0.8018 | 0.3525 | 0.4897 |
| NOVALINCS | 2_bsc-bio-ehr-es_distemist_1 | 0.8183 | 0.3398 | 0.4802 |
Track 2 - ES

| Team Name | Run name | P | R | F1 |
| --- | --- | --- | --- | --- |
| ICUE | run2_single_pp | 0.9146 | 0.9412 | 0.9277 |
| ICUE | run4_GPT_translation | 0.9146 | 0.9412 | 0.9277 |
| ICUE | run5_GPT_translation_all | 0.9146 | 0.9412 | 0.9277 |
| Enigma | 3-system-SpanishRoBERTa | 0.9130 | 0.9348 | 0.9238 |
| Enigma | 1-system-XLMR | 0.9040 | 0.9208 | 0.9123 |
| Enigma | 2-system-XLMR-filtering | 0.9148 | 0.9005 | 0.9076 |
| ICUE | run3_single | 0.8777 | 0.9272 | 0.9018 |
| Siemens | run1_SMR | 0.8928 | 0.8778 | 0.8852 |
| ICUE | run1_multilingual_pp | 0.8287 | 0.9348 | 0.8786 |
| Enigma | 5-system-XLMR-filtering-dict2 | 0.7654 | 0.8871 | 0.8218 |
| NOVALINCS | 3_bsc-bio-ehr-es_drugtemist_4 | 0.9242 | 0.4965 | 0.6460 |
| NOVALINCS | 4_bsc-bio-ehr-es_drugtemist_1 | 0.9076 | 0.4919 | 0.6380 |
| DataScienceTUW | run3_roberta-ct-multilingual | 0.8705 | 0.4342 | 0.5794 |
| Enigma | 4-system-XLMR-filtering-dict1 | 0.4351 | 0.7899 | 0.5611 |
| DataScienceTUW | run5_roberta-ct-mlm | 0.8421 | 0.3912 | 0.5342 |
| DataScienceTUW | run4_mdeberta_ct_mlm_dg | 0.6815 | 0.3836 | 0.4909 |
| DataScienceTUW | run2_mdeberta-ct-multilingual | 0.7647 | 0.3556 | 0.4855 |
| DataScienceTUW | run1_mdeberta-multilingual | 0.3914 | 0.1531 | 0.2201 |
Track 2 - EN

| Team Name | Run name | P | R | F1 |
| --- | --- | --- | --- | --- |
| Enigma | 3-system-BioLinkBERT | 0.8981 | 0.9477 | 0.9223 |
| ICUE | run2_single_pp | 0.9086 | 0.9128 | 0.9107 |
| ICUE | run4_GPT_translation | 0.9086 | 0.9128 | 0.9107 |
| Enigma | 1-system-XLMR | 0.8823 | 0.9233 | 0.9023 |
| Enigma | 2-system-XLMR-filtering | 0.9031 | 0.8989 | 0.9010 |
| Enigma | 5-system-XLMR-filtering-dict2 | 0.8698 | 0.9047 | 0.8869 |
| ICUE | run3_single | 0.8734 | 0.8977 | 0.8854 |
| ICUE | run1_multilingual_pp | 0.8314 | 0.9343 | 0.8799 |
| Siemens | run1_EMR | 0.8685 | 0.8791 | 0.8738 |
| Enigma | 4-system-XLMR-filtering-dict1 | 0.8298 | 0.9210 | 0.8730 |
| ICUE | run5_GPT_translation_all | 0.8767 | 0.8635 | 0.8700 |
| DataScienceTUW | run3_roberta-ct-multilingual | 0.8632 | 0.4364 | 0.5797 |
| DataScienceTUW | run4-mdeberta-windows | 0.7955 | 0.4317 | 0.5597 |
| DataScienceTUW | run5-biobert-mlm-windows | 0.6771 | 0.4410 | 0.5341 |
| DataScienceTUW | run2_mdeberta-ct-multilingual | 0.8453 | 0.3777 | 0.5221 |
| DataScienceTUW | run1_mdeberta-multilingual | 0.5648 | 0.2481 | 0.3448 |
Track 2 - IT

| Team Name | Run name | P | R | F1 |
| --- | --- | --- | --- | --- |
| Enigma | 1-system-XLMR | 0.8840 | 0.8844 | 0.8842 |
| Enigma | 3-system-Italian-Spanish-RoBERTa | 0.8723 | 0.8956 | 0.8838 |
| Enigma | 2-system-XLMR-filtering | 0.9016 | 0.8606 | 0.8806 |
| Siemens | run1_IMR | 0.8891 | 0.8689 | 0.8789 |
| ICUE | run4_GPT_translation | 0.9114 | 0.8461 | 0.8776 |
| ICUE | run5_GPT_translation_all | 0.9114 | 0.8461 | 0.8776 |
| ICUE | run2_single_pp | 0.8186 | 0.9000 | 0.8574 |
| ICUE | run1_multilingual_pp | 0.8139 | 0.8867 | 0.8487 |
| ICUE | run3_single | 0.7879 | 0.8894 | 0.8356 |
| Enigma | 4-system-XLMR-filtering-dict1 | 0.5693 | 0.8578 | 0.6844 |
| Enigma | 5-system-XLMR-filtering-dict2 | 0.5707 | 0.8450 | 0.6813 |
| DataScienceTUW | run3_roberta-ct-multilingual | 0.8264 | 0.4206 | 0.5574 |
| DataScienceTUW | run4-mdeberta | 0.7481 | 0.3928 | 0.5151 |
| DataScienceTUW | run5-biobit-mlm | 0.7922 | 0.3517 | 0.4871 |
| DataScienceTUW | run2_mdeberta-ct-multilingual | 0.7433 | 0.3394 | 0.4661 |
| DataScienceTUW | run1_mdeberta-multilingual | 0.5074 | 0.2094 | 0.2965 |