bolha.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a Brazilian IT community. We love IT/DevOps/Cloud, but we also love to talk about life, the universe, and more. | We are a Brazilian IT community; we like Dev/DevOps/Cloud and more!

Server stats: 252 active users

#LLMs

30 posts · 28 participants · 4 posts today
RS, Author, Novelist, Prosaist:
@BenAveling @petealexharris @davidgerard
Next-most-probable-word pattern matching isn't a brain; it's not artificially intelligent. It's more like a game of Mad Libs, and about as useful.
#AI #genAI #LLM #llms #chatGpt #writer #author #WritersOfMastodon #WritingCommunity
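The post above frames LLM output as "next most probable word" pattern matching. As a purely illustrative sketch of that framing (not of how real LLMs work, which use neural networks over subword tokens and sampling rather than bigram counts), here is a toy greedy next-word generator; the corpus and all names are invented:

```python
from collections import Counter, defaultdict

# Toy illustration of "next most probable word" generation.
# Real LLMs are not bigram counters; this only illustrates the
# greedy next-word idea referenced in the post above.
corpus = (
    "the model predicts the next word and the next word follows the prompt"
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily append the most frequent next word, Mad Libs style."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # "the next word and the next word and the"
```
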
Hacker News:
Automating Interactive Fiction Logic Generation with LLMs in Emacs
https://blog.tendollaradventure.com/automating-story-logic-with-llms/
#HackerNews #Automating #Interactive #Fiction #Logic #Generation #LLMs #Emacs #Storytelling
Evan Hahn:
It's the end of March! I just published my roundup of links from the month. Highlights: essays about LLMs (mostly negative), a video guide to UI layout algorithms, and an anime recommendation. https://evanhahn.com/notes-from-march-2025/
#programming #AI #LLM #LLMs #Zelda #blog
Abraham Samma 🔬🔭👨‍💻:
The contradiction that certain people in the AI space (usually ones with a VC streak) sell is that, despite AI supposedly signalling an era of abundance, everything needs to be pay-walled and wealth concentrated, instead of everything being made open source. This is how you uncover the wolves in this space. #ai #llms
रञ्जित (Ranjit Mathew):
Interesting to compare your own experiences with this:
"AI Blindspots", Edward Z Yang (https://ezyang.github.io/ai-blindspots/).
Via HN: https://news.ycombinator.com/item?id=43414393
On Lobsters: https://lobste.rs/s/0h4nyk/ai_blindspots
#AI #LLMs #VibeCoding #AIAssistedCoding #Programming #Confabulation #AIBlindspots
OddOpinions5:
@tunubesecamirio Why is it that on Mastodon 99% of posts about #AI or #LLMs or #ChatGPT are snarky criticism, when in the real world lots of serious people find AI useful? It's almost as if you and the rest of Mastodon are not connected to reality.
Alex Jimenez:
Futureproofing your brand with creative #AI storytelling
https://www.prdaily.com/futureproofing-your-brand-with-creative-ai-storytelling/
#GenerativeAI #LLMs #DigitalMarketing
Miguel Afonso Caetano:
"Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don't know the answer, rather than speculate. We wanted to understand how this works.

It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.

Sometimes, this sort of "misfire" of the "known answer" circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the "known entity" feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response."

https://www.anthropic.com/research/tracing-thoughts-language-model

#AI #GenerativeAI #LLMs #Chatbots #Anthropic #Claude #Hallucinations
Miguel Afonso Caetano:
"Anthropic's research found that artificially increasing the neurons' weights in the "known answer" feature could force Claude to confidently hallucinate information about completely made-up athletes like "Michael Batkin." That kind of result leads the researchers to suggest that "at least some" of Claude's hallucinations are related to a "misfire" of the circuit inhibiting that "can't answer" pathway—that is, situations where the "known entity" feature (or others like it) is activated even when the token isn't actually well-represented in the training data.

Unfortunately, Claude's modeling of what it knows and doesn't know isn't always particularly fine-grained or cut and dried. In another example, researchers note that asking Claude to name a paper written by AI researcher Andrej Karpathy causes the model to confabulate the plausible-sounding but completely made-up paper title "ImageNet Classification with Deep Convolutional Neural Networks." Asking the same question about Anthropic mathematician Josh Batson, on the other hand, causes Claude to respond that it "cannot confidently name a specific paper... without verifying the information.""

https://arstechnica.com/ai/2025/03/why-do-llms-make-stuff-up-new-research-peers-under-the-hood/

#AI #GenerativeAI #LLMs #Chatbots #Hallucinations
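The two posts above describe a gating mechanism: refusal is the model's default, a "known entity" feature can inhibit it, and hallucinations show up when that inhibition misfires or is artificially boosted. The sketch below is only a toy restatement of that description; the scores, threshold, and function names are invented and bear no relation to Anthropic's actual interpretability tooling:

```python
# Toy gate modeled on the description in the posts above:
# refusal is the default; a strong-enough "known entity" signal suppresses it.
# The scores and threshold are invented for illustration only.
KNOWN_ENTITY_SCORE = {"Michael Jordan": 0.95, "Michael Batkin": 0.05}
INHIBITION_THRESHOLD = 0.5

def answer(entity: str, boost: float = 0.0) -> str:
    """Return an 'answer' or a refusal depending on the known-entity gate.

    `boost` mimics artificially increasing the feature's weight, which the
    Ars Technica piece reports can force a confident hallucination.
    """
    score = KNOWN_ENTITY_SCORE.get(entity, 0.0) + boost
    if score >= INHIBITION_THRESHOLD:
        # The "known entity" feature fires and inhibits the default refusal
        # circuit, so the model proceeds to generate an answer (which may be
        # a confabulation if the entity isn't actually well represented).
        return f"answer about {entity}"
    # Default circuit stays on: decline for lack of information.
    return f"refuse: insufficient information about {entity}"

print(answer("Michael Jordan"))             # answers (known entity)
print(answer("Michael Batkin"))             # refuses (unknown entity)
print(answer("Michael Batkin", boost=0.9))  # forced misfire -> hallucination
```
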
𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕:
🧵 …I already pointed out in this toot thread more than a year ago that AI is data theft. Now even the tabloid media seem to be taking notice.

"Data theft for AI – Meta secretly helped itself to Swiss literature:
Stealing instead of buying – To feed its AI model Llama 3, Meta apparently helped itself illegally to millions of books, including works by Dürrenmatt and Suter."

📰 https://www.blick.ch/digital/datenklau-fuer-die-ki-meta-bediente-sich-heimlich-bei-schweizer-literatur-id20714166.html

#ki #datenklau #daten #literatur #meta #llma #llm #LLMs #ai
Ulrike Hahn:
Join us for our next CCCM "The Cognitive Science of Generative AI" seminar:

"Foundation models of human cognition"
Marcel Binz, Helmholtz, Munich
Tuesday, April 2nd, 16:00 BST, online

Registration: https://psyc.bbk.ac.uk/cccm/cccm-seminar-series/

Abstract: Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this: it is a unified system whose processes are deeply intertwined. In this talk, I will present my ongoing work on foundation models of human cognition: models that can not only predict behavior in a single domain but instead offer a truly universal take on our mind. Furthermore, I outline my vision for how to use such behaviorally predictive models to advance our understanding of human cognition, as well as how they can be scaled to naturalistic environments.

#LLMs #AI @cogsci @philosophy
Paco Hope #resist:
If anybody out there is working on using #LLMs or #AI to analyze #security events in AWS, I wonder if you're considering bullshit attacks via event injection. Let me explain. I'm openly musing about something I don't know much about.

You might be tempted to pipe a lot of EventBridge events into some kind of AI that analyzes them looking for suspicious events. Or you might hook up to CloudWatch log streams and read log entries from, say, your lambda functions, looking for suspicious errors and output.

LLMs are going to be terrible at validating message authenticity. If you have a lambda that is doing something totally innocuous, but you make it print() some JSON that looks just like a GuardDuty finding, that JSON will end up in the lambda function's CloudWatch log stream. Then, if you're piping CloudWatch Logs into an LLM, I don't think it will be smart enough to say "wait a minute, why is JSON that looks like a GuardDuty finding being emitted by this lambda function on its stdout?"

You and I would say "that's really weird. That JSON shouldn't be here in this log stream. Let's go look at what that lambda function is doing and why it's doing that." (Oh, it's Paco and he's just fucking with me.) I think an LLM is far more likely to react "Holy shit! There's a really terrible GuardDuty finding! Light up the pagers! Red alert!"

Having said this, I'm not doing this myself. I don't have any of my #AWS logging streaming into any kind of #AI. So maybe it's better than I think it is. But LLMs are notoriously bad at ignoring anything in their input stream. They tend to take it all at face value and treat it all as legit.

You might even try this with your #SIEM. Is it smart enough to ignore things that show up in the wrong context? Could you emit the JSON of an AWS security event in, say, a Windows Server Event Log that goes to your SIEM? Would it react as if that was a legit event? If you don't even use AWS, wouldn't it be funny if your SIEM responded to this JSON as if it was a big deal?

I'm just pondering this, and I'll credit the source: I'm evaluating an internal Bedrock-based threat modelling tool and it spit out the phrase "EventBridge Event Injection." I thought "oh shit, that's a whole class of issues I haven't thought about."
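As a minimal sketch of the injection scenario described above, assuming a Python Lambda handler and an invented payload (the field names below are only loosely shaped like a GuardDuty finding delivered via EventBridge, not the real schema):

```python
import json

def lambda_handler(event, context):
    """Innocuous Lambda that also prints a fake, GuardDuty-looking finding.

    Anything printed here lands in the function's CloudWatch log stream.
    A downstream LLM (or a naive SIEM rule) that ingests that stream has no
    inherent signal that this JSON came from application stdout rather than
    from GuardDuty itself. The payload is illustrative, not a real finding.
    """
    # ... normal, boring business logic would go here ...

    fake_finding = {
        "detail-type": "GuardDuty Finding",          # looks like the real thing
        "severity": 8.5,
        "title": "Backdoor:EC2/C2Activity (injected via stdout)",
        "resource": {"instanceId": "i-0123456789abcdef0"},
    }
    print(json.dumps(fake_finding))  # ends up in CloudWatch Logs verbatim

    return {"statusCode": 200, "body": "ok"}
```

Whether a downstream LLM or SIEM falls for this presumably depends on how much provenance metadata (log group, event source) the pipeline passes along with the text, which is exactly the open question the post raises.
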
IT News:
Gemini hackers can deliver more potent attacks with a helping hand from… Gemini - In the growing canon of AI security, the indirect prompt injection has eme... - https://arstechnica.com/security/2025/03/gemini-hackers-can-deliver-more-potent-attacks-with-a-helping-hand-from-gemini/
#artificialintelligence #largelanguagemodels #promptinjections #fun-tuning #features #security #biz #gemini #google #llms #ai
Metin Seven 🎨:
Welcome to the semantic apocalypse
Studio Ghibli style and the draining of meaning
https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse
#AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #ML #ghibli #StudioGhibli #tech #technology #BigTech
Paco Hope #resist:
I had this realisation today that people programming with #LLMs are making something like homeopathic software. Homeopathic solutions are diluted so much that essentially no molecules of the supposedly important ingredient are present in the final product.

We have these systems at work where you can drag and drop a couple widgets onto a canvas, type a sentence or 2 as a prompt for an #LLM, and declare that to be an "app". A colleague made a 2-widget app where you drop an excel spreadsheet-based questionnaire into it, and they wrote a single sentence that basically boiled down to "check the answers to the questions to see if they're good." Then the #AI does what AI does (make up bullshit).

The amount of novel intellect and contribution provided by my colleague is nearly zero. It's an 80-billion parameter model, and they wrote an 80-syllable sentence. The training data is probably half a petabyte and their contribution was like 250 bytes. It feels homeopathic to me. How could that contribution be meaningful?

But they think they have done something. I have to be careful to gently point out that perhaps this is a complete and total waste of time, and just hint at the fact that maybe we all got a little dumber for attempting it.

I stand accused of many things. Subtlety is not one of them. I am going to struggle to be gentle.
Paul Giulan:
#Alibaba releases #OpenSource multimodal #AI model Qwen2.5-Omni-7B on #HuggingFace and #GitHub
https://www.cnbc.com/2025/03/27/alibaba-launches-open-source-ai-model-for-cost-effective-ai-agents.html
#China #CN #ArtificialIntelligence #LLM #LLMs
Hacker News:
I genuinely don't understand why some people are still bullish about LLMs
https://twitter.com/skdh/status/1905132853672784121
#HackerNews #LLMs #Debate #AI #Future #Technology #Discussion
AI6YR Ben:
This is why we can't have nice things.

"Lee recently launched Interview Coder, an "invisible" AI tool for job candidates to cheat on technical questions during coding interviews. Lee's startup sells access to the tool for $60 a month."

Columbia suspends student who created AI tool that helps people cheat in coding interviews
https://www.msn.com/en-us/money/news/columbia-suspends-student-who-created-ai-tool-that-helps-people-cheat-in-coding-interviews/ar-AA1BMYHp

#software #ai #llms #cheating
Hacker News:
Parameter-Free KV Cache Compression for Memory-Efficient Long-Context LLMs
https://arxiv.org/abs/2503.10714
#HackerNews #ParameterFreeCompression #MemoryEfficiency #LongContext #LLMs #Research
Daniel Requena 💻⌨️🇵🇹☕:
https://sourcegraph.com/blog/revenge-of-the-junior-developer
#LLMs #coding #future?