Online health information (OHI) in dermatology often exceeds the recommended sixth-grade reading level, hindering patient comprehension. This study assessed the utility of three artificial intelligence large language models (LLMs), ChatGPT-3.5, ChatGPT-4, and Google Gemini, in enhancing the readability of OHI on generalized pustular psoriasis (GPP) while preserving the reliability and quality of the source material. Texts from the top 20 search results for GPP were reworded by each LLM to a sixth-grade reading level and evaluated using the enhanced DISCERN instrument and standard readability indices. Pairwise comparisons of mean scores on each readability scale and on the DISCERN instrument were performed with Tukey's test. All LLMs significantly lowered the reading level of the texts (p<0.01) but scored lower on the DISCERN instrument than the original material (p<0.01). Thus, while the LLMs improved readability, they did not preserve the reliability and quality of the original content. These findings warrant caution in using LLMs for dermatological patient education.