# BioASQ Participants Area

## Test Results for the MedProcNER Task

The evaluation measures below report the performance of each run submitted by the participating systems: precision (P), recall (R), and F1-score (F1).

### NER
| Team Name | Run Name | P | R | F1 |
|---|---|---|---|---|
| BIT.UA | run4-everything | 0.8095 | 0.7878 | 0.7985 |
| BIT.UA | run0-lc-dense-5-wVal | 0.8015 | 0.7878 | 0.7946 |
| BIT.UA | run1-lc-dense-5-full | 0.7954 | 0.7894 | 0.7924 |
| BIT.UA | run3-PlanTL-dense-bilstm-all-wVal | 0.7978 | 0.7870 | 0.7923 |
| BIT.UA | run2-lc-bilstm-all-wVal | 0.7941 | 0.7823 | 0.7881 |
| Vicomtech | run1-xlm_roberta_large_dpa_e105 | 0.8054 | 0.7535 | 0.7786 |
| Vicomtech | run2-roberta_bio_es_dpa_e119 | 0.7679 | 0.7629 | 0.7653 |
| SINAI | run1-fine-tuned-roberta | 0.7631 | 0.7505 | 0.7568 |
| Vicomtech | run3-longformer_base_4096_bne_es | 0.7478 | 0.7588 | 0.7533 |
| SINAI | run4-fulltext-LSTM | 0.7538 | 0.7353 | 0.7444 |
| SINAI | run2-lstmcrf-512 | 0.7786 | 0.7043 | 0.7396 |
| SINAI | run5-lstm-BIO | 0.7705 | 0.7049 | 0.7362 |
| KFU NLP Team | predicted_task1 | 0.7192 | 0.7403 | 0.7296 |
| SINAI | run3-fulltext-GRU | 0.7396 | 0.7110 | 0.7250 |
| Fusion | run4-Spanish-RoBERTa | 0.7165 | 0.7143 | 0.7154 |
| Fusion | run3-XLM-RoBERTA-Clinical | 0.7047 | 0.6916 | 0.6981 |
| NLP-CIC-WFU | Hard4BIO_RoBERTa_postprocessing | 0.7188 | 0.654 | 0.6849 |
| NLP-CIC-WFU | Hard4BIO_RoBERTa | 0.7132 | 0.6507 | 0.6805 |
| Fusion | run1-BioMBERT-NumberTagOnly | 0.6948 | 0.6599 | 0.6769 |
| Fusion | run2-BioMBERT-FullPrep | 0.6894 | 0.6599 | 0.6743 |
| Fusion | run5-Adapted-ALBERT | 0.6928 | 0.6264 | 0.6580 |
| NLP-CIC-WFU | Lazy4BIO_RoBERTa_postprocessing | 0.6301 | 0.6002 | 0.6148 |
| Onto-NLP | run1-bsc-bio-ehr-pharmaconer-voting-filtered | 0.7425 | 0.4374 | 0.5505 |
| Onto-NLP | run1-bsc-bio-ehr-es-pharmaconer-voting | 0.7397 | 0.4374 | 0.5497 |
| Samy Ateia | run2-gpt-4 | 0.6355 | 0.3874 | 0.4814 |
| saheelmayekar | predicted_data | 0.3975 | 0.5350 | 0.4561 |
| Onto-NLP | run1-pharmaconer_filtered_with_exact_match | 0.3296 | 0.6104 | 0.4280 |
| Samy Ateia | run1-gpt3.5-turbo | 0.5230 | 0.2106 | 0.3002 |
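For reference, the P, R, and F1 values in these tables follow the standard definitions over entity-level counts. A minimal sketch (assuming micro-averaged scoring over true positives, false positives, and false negatives; this is an illustrative helper, not the official task scorer):

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from entity-level counts.

    tp: predicted spans matching a gold span
    fp: predicted spans with no gold match (spurious)
    fn: gold spans with no predicted match (missed)
    """
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical example: 80 correct spans, 20 spurious, 25 missed
p, r, f1 = prf1(80, 20, 25)
print(round(p, 4), round(r, 4), round(f1, 4))  # 0.8 0.7619 0.7805
```

Note that F1 is the harmonic mean of P and R, so a run with balanced P and R can outrank one with a higher P but much lower R.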
### Entity Linking
| Team Name | Run Name | P | R | F1 |
|---|---|---|---|---|
| Vicomtech | run1-xlm_roberta_large_dpa_e105_sapbert | 0.5902 | 0.5525 | 0.5707 |
| Vicomtech | run2-roberta_bio_es_dpa_e119_sapbert | 0.5665 | 0.5627 | 0.5646 |
| Vicomtech | run3-roberta_bio_es_dpa_e119_sapbert_condition | 0.5662 | 0.5625 | 0.5643 |
| Vicomtech | run5-longformer_base_4096_bne_es_sapbert | 0.5498 | 0.5580 | 0.5539 |
| Fusion | run4-Spanish-RoBERTa_predictions | 0.5377 | 0.5362 | 0.5369 |
| Fusion | run1-BioMBERT-NumberTagOnly_XLMRSapBERT.tsv | 0.5432 | 0.516 | 0.5293 |
| Fusion | run3-XLM-RoBERTA-XLMRSapBERT | 0.5332 | 0.5235 | 0.5283 |
| SINAI | run1-fine-tuned-roberta | 0.5310 | 0.5224 | 0.5267 |
| Vicomtech | run4-roberta_bio_es_dpa_e119_sapbert_cross_encoder | 0.5248 | 0.5213 | 0.5230 |
| Fusion | run2-BioMBERT-FullPrep_XLMRSapBERT | 0.5332 | 0.5105 | 0.5216 |
| Fusion | run5-Adapted-ALBERT_predictions | 0.5461 | 0.4939 | 0.5187 |
| SINAI | run2-lstmcrf-512 | 0.5455 | 0.4936 | 0.5183 |
| SINAI | run5-lstm-BIO | 0.5352 | 0.4898 | 0.5115 |
| SINAI | run4-fulltext-LSTM | 0.5173 | 0.5047 | 0.5109 |
| SINAI | run3-fulltext-GRU | 0.5079 | 0.4884 | 0.4980 |
| KFU NLP Team | predicted_task2 | 0.3917 | 0.4033 | 0.3974 |
| Onto-NLP | run1-pharmaconer-top1 | 0.2742 | 0.5080 | 0.3562 |
| Onto-NLP | run1-pharmaconer-voter | 0.2723 | 0.5044 | 0.3536 |
| Onto-NLP | run1-cantemist-top1 | 0.2642 | 0.4895 | 0.3432 |
| Onto-NLP | run1-ehr-top1 | 0.2630 | 0.4873 | 0.3416 |
| BIT.UA | run4-everything | 0.3211 | 0.3126 | 0.3168 |
| BIT.UA | run3-PlanTL-dense-bilstm-all-wVal | 0.3188 | 0.3145 | 0.3166 |
| BIT.UA | run0-lc-dense-5-wVal | 0.3180 | 0.3126 | 0.3153 |
| BIT.UA | run1-lc-dense-5-full | 0.3143 | 0.3121 | 0.3132 |
| BIT.UA | run2-lc-bilstm-all-wVal | 0.3133 | 0.3087 | 0.3110 |
| Samy Ateia | run2-gpt-4 | 0.4304 | 0.1282 | 0.1976 |
| Samy Ateia | run1-gpt-3.5-turbo | 0.4051 | 0.0749 | 0.1264 |
### Document Indexing
| Team Name | Run Name | P | R | F1 |
|---|---|---|---|---|
| Vicomtech | run5_roberta_bio_es_dpa_e119_sapbert_condition | 0.6190 | 0.6295 | 0.6242 |
| Vicomtech | run4_xlm_roberta_large_dpa_e105_sapbert | 0.6371 | 0.6109 | 0.6239 |
| Vicomtech | run1_roberta_bio_es_dpa_e119_sapbert | 0.6182 | 0.6295 | 0.6238 |
| Vicomtech | run3_longformer_base_4096_bne_es_sapbert | 0.6039 | 0.6288 | 0.6161 |
| Vicomtech | run2_roberta_bio_es_dpa_e119_sapbert_cross_encoder | 0.5885 | 0.5917 | 0.5901 |
| KFU NLP Team | predicted_task3 | 0.4805 | 0.5054 | 0.4927 |
| BIT.UA | run3-PlanTL-dense-bilstm-all-wVal | 0.3544 | 0.3654 | 0.3598 |
| BIT.UA | run4-everything | 0.3551 | 0.3619 | 0.3585 |
| BIT.UA | run0-lc-dense-5-wVal | 0.3517 | 0.3619 | 0.3567 |
| BIT.UA | run1-lc-dense-5-full | 0.3475 | 0.3612 | 0.3542 |
| BIT.UA | run2-lc-bilstm-all-wVal | 0.3484 | 0.3593 | 0.3537 |
| Samy Ateia | run2-gpt-4 | 0.5266 | 0.1811 | 0.2695 |
| Samy Ateia | run1-gpt3.5-turbo | 0.5060 | 0.1083 | 0.1785 |