Abstract: Background: The use of artificial intelligence for psychological advice shows promise for enhancing accessibility and reducing costs, but it remains unclear whether AI-generated advice can match the quality and empathy of experts. Method: In a blinded, comparative cross-sectional design, licensed psychologists and psychotherapists assessed the quality, empathy, and authorship of psychological advice that was either AI-generated or authored by experts. Results: AI-generated responses were rated significantly more favorably for emotional (OR = 1.79, 95% CI [1.1, 2.93], p = .02) and motivational empathy (OR = 1.84, 95% CI [1.12, 3.04], p = .02). Ratings for scientific quality (p = .10) and cognitive empathy (p = .08) were comparable to those of expert advice. Participants could not distinguish between AI- and expert-authored advice (p = .27), but perceived expert authorship was associated with more favorable ratings across these measures (ORs for perceived AI vs. perceived expert authorship ranging from 0.03 to 0.15, all p < .001). For overall preference, AI-authored advice was favored when assessed blindly according to its actual source (beta = 6.96, p = .002). Nevertheless, advice perceived as expert-authored was also strongly preferred (beta = 6.26, p = .001), with 93.55% of participants preferring the advice they believed came from an expert, irrespective of its true origin. Conclusions: AI demonstrates the potential to match expert performance in asynchronous written psychological advice, but biases favoring perceived expert authorship may hinder its broader acceptance. Mitigating these biases and evaluating AI's trustworthiness and empathy are important next steps for the safe and effective integration of AI into clinical practice.
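Note on the reported statistics: the abstract gives odds ratios with 95% confidence intervals but not the underlying analysis. The sketch below is a minimal illustration only, assuming a simple logistic regression on binary favorability ratings with simulated data; the study's actual model, dataset, and variable names (e.g., "favorable", "ai") are not provided in the abstract and are invented here.

# Illustrative sketch (assumed model, simulated data): estimating an odds
# ratio and 95% CI for AI- vs. expert-authored advice via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({"ai": rng.integers(0, 2, n)})   # 1 = AI-authored, 0 = expert-authored

# Simulate binary "favorable empathy rating" with a true OR of about exp(0.58) ≈ 1.79.
logit_p = -0.2 + 0.58 * df["ai"]
df["favorable"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("favorable ~ ai", data=df).fit(disp=False)
or_ai = np.exp(model.params["ai"])                 # odds ratio for AI vs. expert authorship
ci_low, ci_high = np.exp(model.conf_int().loc["ai"])
print(f"OR = {or_ai:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")

A mixed or ordinal model (e.g., accounting for repeated ratings per participant) would likely be closer to what such a design requires; the logistic form above is chosen only to show how an OR and CI of this kind are read off a fitted model.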