Artificial Intelligence in Scientific Research: Opportunities, Limits, and the Engineering Realization of the Data-Driven Paradigm
DOI: https://doi.org/10.54097/frhf7579
Keywords: AI for Science; physics-aware machine learning; neural operator; physics-informed neural networks.
Abstract
This paper presents a publication-oriented blueprint for using artificial intelligence in scientific research. The central claim is that AI augments, rather than replaces, the scientific method when its use follows a sequence that prioritizes domain structure, leverages learning for acceleration, quantifies and calibrates uncertainty, and secures results through reproducibility and auditability. It articulates four research questions: where AI most productively intervenes within mathematics, physics, chemistry, and cross-disciplinary settings; how order-of-magnitude gains can be realized without violating conservation laws, symmetries, or boundary and initial conditions; how unified evaluation practices (calibration, out-of-distribution (OOD) testing, negative controls, and independent verification) distinguish operational value from deceptively high scores; and how to operationalize the entire process as a closed loop connecting data, models, decisions, and new evidence. The analysis section develops, for each discipline, a coherent account of problem formulations; representative methods such as neural operators and physics-informed neural networks; evaluation practices that emphasize physical validity alongside error; characteristic failure modes and their mitigations; and the rationale for combining learning with structure. The solutions section translates these ideas into engineering practice, describing structure encoding at the architectural and objective levels, probabilistic reporting and calibration, robust OOD testing, closed-loop optimization with laboratory or simulation interfaces, and the governance elements that make findings portable, auditable, and energy-aware. The conclusion integrates these strands into design principles for research programs, courses, and deployments.
Throughout, the prose is organized into continuous paragraphs under numbered headings, in line with academic style, while in-text citations follow the bracketed numbering convention.
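As a concrete illustration of the calibration practice the abstract highlights, the sketch below checks whether a model's reported uncertainties are honest by measuring the empirical coverage of its central 90% prediction intervals. It is a minimal, hypothetical example: the predictive means and standard deviations are simulated stand-ins, not output from any method described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive distribution: a model reports a mean and a
# standard deviation per test point (all values here are simulated).
n = 10_000
truth = rng.normal(0.0, 1.0, size=n)
pred_mean = truth + rng.normal(0.0, 0.2, size=n)  # prediction errors ~ N(0, 0.2)
pred_std = np.full(n, 0.2)                        # model's claimed uncertainty

# Empirical coverage of the central 90% prediction interval:
# a calibrated model should contain the truth ~90% of the time.
z90 = 1.6449  # two-sided 90% quantile of the standard normal
lo = pred_mean - z90 * pred_std
hi = pred_mean + z90 * pred_std
coverage = np.mean((truth >= lo) & (truth <= hi))
print(f"empirical 90% coverage: {coverage:.3f}")
```

If the reported standard deviations were too small, coverage would fall well below 0.90 (overconfidence); if too large, it would exceed 0.90 (underconfidence). Repeating this check on OOD splits, as the abstract recommends, separates genuinely calibrated models from ones that only look calibrated in-distribution.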
References
[1] Lu L, Jin P, Karniadakis G E. Learning nonlinear operators via DeepONet. Nature Machine Intelligence, 2021, 3: 218–229. DOI: https://doi.org/10.1038/s42256-021-00302-5
[2] Dutta A, Das S. Not just another survey on physics-informed neural networks (PINNs): Foundations, advances, and open problems. Journal of the ACM, 2025, 37(4).
[3] Alqarafi A, Batool H, Abbas T, Janjua J I, Ramay S A, Ahmed M. Estimating uncertainty in deep learning methods and applications. In: Proceedings of the 2024 International Conference on Computer and Applications (ICCA), 2024, pp. 1–6. IEEE. DOI: https://doi.org/10.1109/ICCA62237.2024.10928030
[4] Bengio Y, Lodi A, Prouvost A. Machine learning for combinatorial optimization: A methodological tour d’horizon. European Journal of Operational Research, 2021, 290(2): 405–421. DOI: https://doi.org/10.1016/j.ejor.2020.07.063
[5] Strait J D, Moran K R, Murph A C, Hyman J D, Viswanathan H S, Stauffer P H. Covariate-informed bifidelity bias correction of distributional output. SIAM/ASA Journal on Uncertainty Quantification, 2025, 13(3): 1616–1648. DOI: https://doi.org/10.1137/24M1690667
[6] Hauth J. Advances in intuitive priors and scalable algorithms for Bayesian deep neural network models in scientific applications. Doctoral dissertation, 2025.
[7] Bisram R. Predicting isotopologue counts from bulk unlabeled metabolomics data. The Cooper Union for the Advancement of Science and Art, 2023.
[8] Fang J, Gentine P. Exploring optimal complexity for water stress representation in terrestrial carbon models: A hybrid machine learning model approach. Journal of Advances in Modeling Earth Systems, 2024, 16(12): e2024MS004308. DOI: https://doi.org/10.1029/2024MS004308
[9] Aarrestad T, et al. Fast inference of deep neural networks in FPGAs for particle physics. Frontiers in Big Data, 2021, 4: 676580.
[10] Thais S, Calafiura P, Chachamis G, DeZoort G, Duarte J, Ganguly S, Kagan M, Murnane D, Neubauer M S, Terao K. Graph neural networks in particle physics: Implementations, innovations, and challenges. arXiv preprint arXiv:2203.12852, 2022.
[11] Saxena D, Cao J. Generative adversarial networks (GANs): Challenges, solutions, and future directions. ACM Computing Surveys (CSUR), 2021, 54(3): 1–42. DOI: https://doi.org/10.1145/3446374
[12] Wayo D D. Ensembles of graph and physics-informed machine learning for scientific modeling in materials science: A review. Archives of Computational Methods in Engineering, 2025: 1–26. DOI: https://doi.org/10.1007/s11831-025-10325-5
[13] Do B, Zhang R. Multi-fidelity Bayesian optimization in engineering design. arXiv preprint arXiv:2311.13050, 2023.
[14] MacLeod B P, Parlane F G L, Morrissey T D, et al. Self-driving laboratory for accelerated discovery of thin-film materials. Science Advances, 2020, 6(20): eaaz8867. DOI: https://doi.org/10.1126/sciadv.aaz8867
[15] Bronstein M M, Bruna J, Cohen T, Veličković P. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.
[16] Hertwig R, Herzog S M, Kozyreva A. Blinding to circumvent human biases: Deliberate ignorance in humans, institutions, and machines. Perspectives on Psychological Science, 2024, 19(5): 849–859. DOI: https://doi.org/10.1177/17456916231188052
[17] Franklin A, Laymon R. Experimentation in physics in the 20th and 21st centuries. In: Oxford Research Encyclopedia of Physics, 2024. DOI: https://doi.org/10.1093/acrefore/9780190871994.013.132
License
Copyright (c) 2025 Highlights in Business, Economics and Management

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.