Abstract: The parameter that captures the similarity among disciplinary categories is a key quantity in many measures of interdisciplinarity. This study evaluates the feasibility of using large language models to estimate this parameter, rather than traditional methods based on citation networks among disciplines. An experimental procedure tested the precision, agreement, resilience, robustness, and explainability of estimates from OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. The experiment collected a sample of 228 similarity matrices across two disciplinary taxonomies, for a total of 16,200 sampled estimate values. The experiment concludes that Gemini produces precise estimates, comparable to those of traditional methods. ChatGPT stands out only for its superior resilience to semantically trivial changes in how disciplines are described. Claude shows a balanced profile. While rarely in full agreement, all three models perform the estimation task sufficiently well.