Cross-trait learning with a canonical transformer tops custom attention in genotype–phenotype mapping

We added standard transformer components that Rijal et al. (2025) omitted from their attention-based genotype–phenotype mapping model. This addition substantially boosts predictive accuracy on their yeast dataset.
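This stub doesn't specify which components were added; the details are in the linked pub. As a rough, hypothetical illustration of what "canonical" typically means relative to bare attention, the sketch below wraps a self-attention sublayer with residual connections, layer normalization, and a position-wise MLP. All names and dimensions here are illustrative assumptions, not the pub's actual architecture:

```python
import numpy as np

# Hypothetical sketch of a "canonical" transformer encoder block in NumPy.
# Residual connections, layer norm, and the feed-forward sublayer are the
# usual components distinguishing a full transformer block from attention
# alone; whether these match the pub's additions is an assumption.

def layer_norm(x, eps=1e-5):
    # Normalize each token's features to zero mean, unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a (tokens, dim) matrix
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def transformer_block(x, params):
    # Attention sublayer with residual connection and layer norm
    h = layer_norm(x + self_attention(x, *params["attn"]))
    # Position-wise feed-forward (ReLU MLP), also with residual + norm
    W1, W2 = params["mlp"]
    return layer_norm(h + np.maximum(h @ W1, 0.0) @ W2)

rng = np.random.default_rng(0)
dim, tokens = 16, 8  # e.g. 8 genotype tokens embedded in 16 dims (illustrative)
params = {
    "attn": [rng.normal(0, 0.1, (dim, dim)) for _ in range(3)],
    "mlp": [rng.normal(0, 0.1, (dim, 4 * dim)), rng.normal(0, 0.1, (4 * dim, dim))],
}
x = rng.normal(size=(tokens, dim))
out = transformer_block(x, params)
print(out.shape)  # (8, 16)
```

A bare-attention baseline would be `self_attention(x, *params["attn"])` alone; the residual/norm/MLP wrapping is exactly the kind of "standard" machinery the abstract credits with the accuracy gain.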

Published May 1, 2025
DOI: 10.57844/arcadia-bmb9-fzxd
Additional assets:

The full pub is available here.

The source code to generate it is available in this GitHub repo (DOI: 10.5281/zenodo.15320438).

In the future, we hope to host notebook pubs directly on PubPub. Until that’s possible, we’ll create stubs like this with key metadata like the DOI, author roles, citation information, and an external link to the pub itself.

