1. Babuji, Y., Woodard, A., Li, Z., Katz, D. S., Clifford, B., Kumar, R., Lacinski, L., Chard, R., Wozniak, J. M., Foster, I., Wilde, M., & Chard, K. (2019). Parsl: Pervasive parallel programming in Python. In Proceedings of the 28th International Symposium on High-Performance Parallel and Distributed Computing (HPDC '19) (pp. 25–36). Association for Computing Machinery. https://doi.org/10.1145/3307681.3325400

2. Choi, H. K., & Li, Y. (2024). PICLe: Eliciting diverse behaviors from large language models with persona in-context learning. arXiv preprint arXiv:2405.02501.

3. de Oliveira, D., Liu, J., & Pacitti, E. (2019). Data-Intensive Workflow Management. Springer International Publishing.

4. Di Tommaso, P., Chatzou, M., Floden, E. W., Barja, P. P., Palumbo, E., & Notredame, C. (2017). Nextflow enables reproducible computational workflows. Nature Biotechnology, 35(4), 316–319.

5. Dong, Q., Li, L., Dai, D., Zheng, C., Wu, Z., Chang, B., Sun, X., Xu, J., Li, L., & Sui, Z. (2022). A survey on in-context learning. Conference on Empirical Methods in Natural Language Processing.

6. Duque, A., Syed, A., Day, K. V., Berry, M. J., Katz, D. S., & Kindratenko, V. V. (2023). Leveraging large language models to build and execute computational workflows. arXiv preprint arXiv:2312.07711.

7. Koziolek, H., Grüner, S., Hark, R., Ashiwal, V., Linsbauer, S., & Eskandani, N. (2024, April). LLM-based and retrieval-augmented control code generation. In Proceedings of the 1st International Workshop on Large Language Models for Code (pp. 22–29).

8. Rocklin, M. (2015). Dask: Parallel computation with blocked algorithms and task scheduling. In Proceedings of the 14th Python in Science Conference (SciPy 2015).

9. Paiva, L., Assis, G., Amorim, A., Dias, L. G., Paes, A., & Oliveira, D. (2025). Domínio delimitado, ódio exposto: O uso de prompts para identificação de discurso de ódio online com LLMs [Bounded domain, exposed hate: Using prompts to identify online hate speech with LLMs]. In SBBD '25, Fortaleza, Brazil.

10. Sänger, M., De Mecquenem, N., Lewińska, K. E., Bountris, V., Lehmann, F., Leser, U., & Kosch, T. (2024). A qualitative assessment of using ChatGPT as large language model for scientific workflow development. GigaScience, 13, giae030.

11. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

12. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824–24837.

13. Xu, J., Du, W., Liu, X., & Li, X. (2024, October). LLM4Workflow: An LLM-based automated workflow model generation tool. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (pp. 2394–2398).

14. Yildiz, O., & Peterka, T. (2024). Do large language models speak scientific workflows? arXiv preprint arXiv:2412.10606.

15. Zhang, X., Xie, Y., Huang, J., Ma, J., Pan, Z., Liu, Q., Xiong, Z., Ergen, T., Shim, D., Lee, H., & Mei, Q. (2024). MASSW: A new dataset and benchmark tasks for AI-assisted scientific workflows. arXiv preprint arXiv:2406.06357.