Jon Ippolito<p>Viewing large language models as compression machines helps explain why they seem by turns big-brained and pea-brained. To learn more, join me for <em>Honey, AI Shrunk the Archive</em>, an IEEE/UMaine AI webinar next Thursday, 3 April, at noon Eastern.</p><p><a href="https://ai.umaine.edu/webinars" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">ai.umaine.edu/webinars</span><span class="invisible"></span></a></p><p><a href="https://digipres.club/tags/AIethics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIethics</span></a> <a href="https://digipres.club/tags/AIEdu" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIEdu</span></a> <a href="https://digipres.club/tags/AIinEducation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIinEducation</span></a> <a href="https://digipres.club/tags/AIliteracy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIliteracy</span></a> <a href="https://digipres.club/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ChatGPT</span></a> <a href="https://digipres.club/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a></p>