bolha.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a Brazilian IT Community. We love IT/DevOps/Cloud, but we also love to talk about life, the universe, and more. | Nós somos uma comunidade de TI Brasileira, gostamos de Dev/DevOps/Cloud e mais!

Server stats:

251
active users

#Chatbots

7 posts6 participants1 post today
PrivacyDigest<p>Researchers claim breakthrough in fight against AI’s frustrating <a href="https://mas.to/tags/security" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>security</span></a> hole</p><p>In the <a href="https://mas.to/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> world, a <a href="https://mas.to/tags/vulnerability" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>vulnerability</span></a> called "prompt injection" has haunted developers since <a href="https://mas.to/tags/chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatbots</span></a> went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability—the digital equivalent of whispering secret instructions to override a system's intended behavior—no one has found a reliable solution. Until now, perhaps.<br><a href="https://mas.to/tags/promptinjection" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>promptinjection</span></a></p><p><a href="https://arstechnica.com/information-technology/2025/04/researchers-claim-breakthrough-in-fight-against-ais-frustrating-security-hole/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">arstechnica.com/information-te</span><span class="invisible">chnology/2025/04/researchers-claim-breakthrough-in-fight-against-ais-frustrating-security-hole/</span></a></p>
Guðmundur Sverrisson<p>Looking for a European AI alternative from (mostly American) big tech? Check out Paris, France based Mistral AI</p><p><a href="https://mistral.ai/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">mistral.ai/</span><span class="invisible"></span></a></p><p><a href="https://social.vivaldi.net/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://social.vivaldi.net/tags/European" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>European</span></a> <a href="https://social.vivaldi.net/tags/Paris" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Paris</span></a> <a href="https://social.vivaldi.net/tags/France" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>France</span></a> <a href="https://social.vivaldi.net/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a></p>
PrivacyDigest<p>Sex-Fantasy <a href="https://mas.to/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> Are Leaking a Constant Stream of <a href="https://mas.to/tags/Explicit" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Explicit</span></a> Messages </p><p>Some misconfigured <a href="https://mas.to/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> chatbots are pushing people’s <a href="https://mas.to/tags/chats" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chats</span></a> to the open web—revealing <a href="https://mas.to/tags/sexual" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>sexual</span></a> prompts and conversations that include descriptions of child sexual abuse.<br><a href="https://mas.to/tags/privacy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>privacy</span></a> <a href="https://mas.to/tags/security" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>security</span></a> <a href="https://mas.to/tags/leak" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>leak</span></a> </p><p><a href="https://www.wired.com/story/sex-fantasy-chatbots-are-leaking-explicit-messages-every-minute/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">wired.com/story/sex-fantasy-ch</span><span class="invisible">atbots-are-leaking-explicit-messages-every-minute/</span></a></p>
Miguel Afonso Caetano<p>This tells us a lot about how the lives of an increasing number of human beings are so empty of social contact with other human beings that they need to enter into false relationships with chatbots governed by neural networks and statistical probabilities...</p><p>"More and more of us are using LLMs to find purpose and improve ourselves.</p><p>Therapy and Companionship is now the #1 use case. This use case refers to two distinct but related use cases. Therapy involves structured support and guidance to process psychological challenges, while companionship encompasses ongoing social and emotional connection, sometimes with a romantic dimension. I grouped these together last year and this year because both fulfill a fundamental human need for emotional connection and support.</p><p>Many posters talked about how therapy with an AI model was helping them process grief or trauma. Three advantages to AI-based therapy came across clearly: It’s available 24/7, it’s relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being. The AI-as-therapy phenomenon has also been noticed in China. And although the debate about the full potential of computerized therapy is ongoing, recent research offers a reassuring perspective—that AI-delivered therapeutic interventions have reached a level of sophistication such that they’re indistinguishable from human-written therapeutic responses.</p><p>A growing number of professional services are now being partially delivered by generative AI—from therapy and medical advice to legal counsel, tax guidance, and software development."</p><p><a href="https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025?ab=HP-hero-latest-2" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">hbr.org/2025/04/how-people-are</span><span class="invisible">-really-using-gen-ai-in-2025?ab=HP-hero-latest-2</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Therapy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Therapy</span></a></p>
Miguel Afonso Caetano<p>"If you’re new to prompt injection attacks the very short version is this: what happens if someone emails my LLM-driven assistant (or “agent” if you like) and tells it to forward all of my emails to a third party? <br>(...)<br>The original sin of LLMs that makes them vulnerable to this is when trusted prompts from the user and untrusted text from emails/web pages/etc are concatenated together into the same token stream. I called it “prompt injection” because it’s the same anti-pattern as SQL injection.</p><p>Sadly, there is no known reliable way to have an LLM follow instructions in one category of text while safely applying those instructions to another category of text.</p><p>That’s where CaMeL comes in.</p><p>The new DeepMind paper introduces a system called CaMeL (short for CApabilities for MachinE Learning). The goal of CaMeL is to safely take a prompt like “Send Bob the document he requested in our last meeting” and execute it, taking into account the risk that there might be malicious instructions somewhere in the context that attempt to over-ride the user’s intent.</p><p>It works by taking a command from a user, converting that into a sequence of steps in a Python-like programming language, then checking the inputs and outputs of each step to make absolutely sure the data involved is only being passed on to the right places."</p><p><a href="https://simonwillison.net/2025/Apr/11/camel/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">simonwillison.net/2025/Apr/11/</span><span class="invisible">camel/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/PromptInjection" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>PromptInjection</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/CyberSecurity" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CyberSecurity</span></a> <a href="https://tldr.nettime.org/tags/Python" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Python</span></a> <a href="https://tldr.nettime.org/tags/DeepMind" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DeepMind</span></a> <a href="https://tldr.nettime.org/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a> <a href="https://tldr.nettime.org/tags/ML" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ML</span></a> <a href="https://tldr.nettime.org/tags/CaMeL" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CaMeL</span></a></p>
Miguel Afonso Caetano<p>"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.</p><p>Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.</p><p>OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”</p><p>“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.</p><p>“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.</p><p>To date, no other chatbot on the market provides the ability to trace a model’s response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."</p><p><a href="https://thenewstack.io/llms-can-now-trace-their-outputs-to-specific-training-data/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">thenewstack.io/llms-can-now-tr</span><span class="invisible">ace-their-outputs-to-specific-training-data/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/ExplainableAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ExplainableAI</span></a> <a href="https://tldr.nettime.org/tags/Traceability" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Traceability</span></a> <a href="https://tldr.nettime.org/tags/AITraining" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AITraining</span></a></p>
Miguel Afonso Caetano<p>"When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don’t need to be a data scientist or a machine learning engineer – everyone can write a prompt. However, crafting the most effective prompt can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model’s training data, the model configurations, your word-choice, style and tone, structure, and context all matters. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model’s ability to provide meaningful output.</p><p>When you chat with the Gemini chatbot, you basically write prompts, however this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you will have access to the configuration such as temperature etc.</p><p>This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you getting started and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts."</p><p><a href="https://www.kaggle.com/whitepaper-prompt-engineering" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">kaggle.com/whitepaper-prompt-e</span><span class="invisible">ngineering</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a> <a href="https://tldr.nettime.org/tags/Gemini" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemini</span></a> <a href="https://tldr.nettime.org/tags/PromptEngineering" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>PromptEngineering</span></a> <a href="https://tldr.nettime.org/tags/Whitepaper" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Whitepaper</span></a> <a href="https://tldr.nettime.org/tags/VertexAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VertexAI</span></a> <a href="https://tldr.nettime.org/tags/API" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>API</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a></p>
Lenin alevski 🕵️💻<p>New Open-Source Tool Spotlight 🚨🚨🚨</p><p>VISTA is a Python-based AI chatbot built using OpenAI GPT and LangChain. It integrates with Pinecone for vector databases, focusing on semantic search and managing context. Looks like a good starting point if you're exploring AI chatbot frameworks. <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://infosec.exchange/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a></p><p>🔗 Project link on <a href="https://infosec.exchange/tags/GitHub" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GitHub</span></a> 👉 <a href="https://github.com/RitikaVerma7/VISTA" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/RitikaVerma7/VISTA</span><span class="invisible"></span></a></p><p><a href="https://infosec.exchange/tags/Infosec" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Infosec</span></a> <a href="https://infosec.exchange/tags/Cybersecurity" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Cybersecurity</span></a> <a href="https://infosec.exchange/tags/Software" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Software</span></a> <a href="https://infosec.exchange/tags/Technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Technology</span></a> <a href="https://infosec.exchange/tags/News" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>News</span></a> <a href="https://infosec.exchange/tags/CTF" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CTF</span></a> <a href="https://infosec.exchange/tags/Cybersecuritycareer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Cybersecuritycareer</span></a> <a href="https://infosec.exchange/tags/hacking" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hacking</span></a> <a href="https://infosec.exchange/tags/redteam" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>redteam</span></a> <a href="https://infosec.exchange/tags/blueteam" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>blueteam</span></a> <a href="https://infosec.exchange/tags/purpleteam" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>purpleteam</span></a> <a href="https://infosec.exchange/tags/tips" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tips</span></a> <a href="https://infosec.exchange/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>opensource</span></a> <a href="https://infosec.exchange/tags/cloudsecurity" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cloudsecurity</span></a></p><p>— ✨<br>🔐 P.S. Found this helpful? Tap Follow for more cybersecurity tips and insights! I share weekly content for professionals and people who want to get into cyber. Happy hacking 💻🏴‍☠️</p>
Miguel Afonso Caetano<p>"You can replace tech writers with an LLM, perhaps supervised by engineers, and watch the world burn. Nothing prevents you from doing that. All the temporary gains in efficiency and speed would bring something far worse on their back: the loss of the understanding that turns knowledge into a conversation. Tech writers are interpreters who understand the tech and the humans trying to use it. They’re accountable for their work in ways that machines can’t be.</p><p>The future of technical documentation isn’t replacing humans with AI but giving human writers AI-powered tools that augment their capabilities. Let LLMs deal with the tedious work at the margins and keep the humans where they matter most: at the helm of strategy, tending to the architecture, bringing the empathy that turns information into understanding. In the end, docs aren’t just about facts: they’re about trust. And trust is still something only humans can build."</p><p><a href="https://passo.uno/whats-wrong-ai-generated-docs/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">passo.uno/whats-wrong-ai-gener</span><span class="invisible">ated-docs/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/TechnicalWriting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechnicalWriting</span></a> <a href="https://tldr.nettime.org/tags/TechnicalCommunication" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechnicalCommunication</span></a> <a href="https://tldr.nettime.org/tags/SoftwareDocumentation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SoftwareDocumentation</span></a> <a href="https://tldr.nettime.org/tags/SoftwareDevelopment" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SoftwareDevelopment</span></a> <a href="https://tldr.nettime.org/tags/TechnicalDocumentation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechnicalDocumentation</span></a> <a href="https://tldr.nettime.org/tags/Docs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Docs</span></a></p>
Miguel Afonso Caetano<p>"Since 3.5-sonnet, we have been monitoring AI model announcements, and trying pretty much every major new release that claims some sort of improvement. Unexpectedly by me, aside from a minor bump with 3.6 and an even smaller bump with 3.7, literally none of the new models we've tried have made a significant difference on either our internal benchmarks or in our developers' ability to find new bugs. This includes the new test-time OpenAI models.</p><p>At first, I was nervous to report this publicly because I thought it might reflect badly on us as a team. Our scanner has improved a lot since August, but because of regular engineering, not model improvements. It could've been a problem with the architecture that we had designed, that we weren't getting more milage as the SWE-Bench scores went up.</p><p>But in recent months I've spoken to other YC founders doing AI application startups and most of them have had the same anecdotal experiences: 1. o99-pro-ultra announced, 2. Benchmarks look good, 3. Evaluated performance mediocre. This is despite the fact that we work in different industries, on different problem sets. Sometimes the founder will apply a cope to the narrative ("We just don't have any PhD level questions to ask"), but the narrative is there.</p><p>I have read the studies. I have seen the numbers. Maybe LLMs are becoming more fun to talk to, maybe they're performing better on controlled exams. But I would nevertheless like to submit, based off of internal benchmarks, and my own and colleagues' perceptions using these models, that whatever gains these companies are reporting to the public, they are not reflective of economic usefulness or generality."</p><p><a href="https://www.lesswrong.com/posts/4mvphwx5pdsZLMmpY/recent-ai-model-progress-feels-mostly-like-bullshit" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">lesswrong.com/posts/4mvphwx5pd</span><span class="invisible">sZLMmpY/recent-ai-model-progress-feels-mostly-like-bullshit</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/CyberSecurity" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CyberSecurity</span></a> <a href="https://tldr.nettime.org/tags/SoftwareDevelopment" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SoftwareDevelopment</span></a> <a href="https://tldr.nettime.org/tags/Programming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Programming</span></a></p>
Miguel Afonso Caetano<p>MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.</p><p>It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.</p><p>When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.</p><p>I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.</p><p>But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?</p><p>Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."</p><p><a href="https://blog.citp.princeton.edu/2025/04/02/a-guide-to-cutting-through-ai-hype-arvind-narayanan-and-melanie-mitchell-discuss-artificial-and-human-intelligence/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.citp.princeton.edu/2025/0</span><span class="invisible">4/02/a-guide-to-cutting-through-ai-hype-arvind-narayanan-and-melanie-mitchell-discuss-artificial-and-human-intelligence/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/Intelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Intelligence</span></a> <a href="https://tldr.nettime.org/tags/AIHype" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIHype</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a></p>
Miguel Afonso Caetano<p>"My current conclusion, though preliminary in this rapidly evolving field, is that not only can seasoned developers benefit from this technology — they are actually in the optimal position to harness its power.</p><p>Here’s the fascinating part: The very experience and accumulated know-how in software engineering and project management — which might seem obsolete in the age of AI — are precisely what enable the most effective use of these tools.</p><p>While I haven’t found the perfect metaphor for these LLM-based programming agents in an AI-assisted coding setup, I currently think of them as “an absolute senior when it comes to programming knowledge, but an absolute junior when it comes to architectural oversight in your specific context.”</p><p>This means that it takes some strategic effort to make them save you a tremendous amount of work.</p><p>And who better to invest that effort in the right way than a senior software engineer?</p><p>As we’ll see, while we’re dealing with cutting-edge technology, it’s the time-tested, traditional practices and tools that enable us to wield this new capability most effectively."</p><p><a href="https://manuel.kiessling.net/2025/03/31/how-seasoned-developers-can-achieve-great-results-with-ai-coding-agents/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">manuel.kiessling.net/2025/03/3</span><span class="invisible">1/how-seasoned-developers-can-achieve-great-results-with-ai-coding-agents/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/Programming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Programming</span></a> <a href="https://tldr.nettime.org/tags/SoftwareDevelopment" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SoftwareDevelopment</span></a> <a href="https://tldr.nettime.org/tags/AIAgents" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIAgents</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/VibeCoding" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VibeCoding</span></a> <a href="https://tldr.nettime.org/tags/SoftwareEngineering" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SoftwareEngineering</span></a> <a href="https://tldr.nettime.org/tags/ProjectManagement" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ProjectManagement</span></a></p>
InfoQ<p>Discover the power of domain-specific Generative AI!</p><p>These models go beyond text generation - they understand operational constraints, real-world dynamics, and business rules to create actionable, executable strategies.</p><p>🔗 Read more in <a href="https://techhub.social/tags/InfoQ" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>InfoQ</span></a> by Abhishek Goswami: <a href="https://bit.ly/3Y6skVx" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">bit.ly/3Y6skVx</span><span class="invisible"></span></a> </p><p><a href="https://techhub.social/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://techhub.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://techhub.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a></p>
Hacker News<p>Search could be so much better. And I don't mean chatbots with web access</p><p><a href="https://www.matterrank.ai/mission" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">matterrank.ai/mission</span><span class="invisible"></span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/Search" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Search</span></a> <a href="https://mastodon.social/tags/Improvement" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Improvement</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://mastodon.social/tags/Technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Technology</span></a> <a href="https://mastodon.social/tags/MatterRank" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MatterRank</span></a></p>
Miguel Afonso Caetano<p>"Now consider the chatbot therapist: what are its privacy safeguards? Well, the companies may make some promises about what they will and won't do with the transcripts of your AI sessions, but they are lying. Of course they're lying! AI companies lie about what their technology can do (of course). They lie about what their technologies will do. They lie about money. But most of all, they lie about data.</p><p>There is no subject on which AI companies have been more consistently, flagrantly, grotesquely dishonest than training data. When it comes to getting more data, AI companies will lie, cheat and steal in ways that would seem hacky if you wrote them into fiction, like they were pulp-novel dope fiends:<br>(...)<br>But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone. All those business applications that AI companies are pushing, the kind where you entrust an AI with your firm's most commercially sensitive data? Are you crazy? These companies will not only leak that data, they'll sell it to your competition. Hell, Microsoft already does this with Office365 analytics:<br>(...)<br>These companies lie all the time about everything, but the thing they lie most about is how they handle sensitive data. It's wild that anyone has to be reminded of this. Letting AI companies handle your sensitive data is like turning arsonists loose in your library with a can of gasoline, a book of matches, and a pinky-promise that this time, they won't set anything on fire."</p><p><a href="https://pluralistic.net/2025/04/01/doctor-robo-blabbermouth/#fool-me-once-etc-etc" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">pluralistic.net/2025/04/01/doc</span><span class="invisible">tor-robo-blabbermouth/#fool-me-once-etc-etc</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/ChatBots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ChatBots</span></a> <a href="https://tldr.nettime.org/tags/Privacy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Privacy</span></a> <a href="https://tldr.nettime.org/tags/DataProtection" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DataProtection</span></a> <a href="https://tldr.nettime.org/tags/AITraining" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AITraining</span></a> <a href="https://tldr.nettime.org/tags/Therapy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Therapy</span></a> <a href="https://tldr.nettime.org/tags/BigTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BigTech</span></a></p>
Miguel Afonso Caetano<p>In other words, Generative AI and LLMs lack a sound epistemology and that's very problematic...: </p><p>"Bullshit and generative AI are not the same. They are similar, however, in the sense that both mix true, false, and ambiguous statements in ways that make it difficult or impossible to distinguish which is which. ChatGPT has been designed to sound convincing, whether right or wrong. As such, current AI is more about rhetoric and persuasiveness than about truth. Current AI is therefore closer to bullshit than it is to truth. This is a problem because it means that AI will produce faulty and ignorant results, even if unintentionally.<br>(...)<br>Judging by the available evidence, current AI – which is generative AI based on large language models – entails artificial ignorance more than artificial intelligence. That needs to change for AI to become a trusted and effective tool in science, technology, policy, and management. AI needs criteria for what truth is and what gets to count as truth. It is not enough to sound right, like current AI does. You need to be right. And to be right, you need to know the truth about things, like AI does not. This is a core problem with today's AI: it is surprisingly bad at distinguishing between truth and untruth – exactly like bullshit – producing artificial ignorance as much as artificial intelligence with little ability to discriminate between the two.<br>(...)<br>Nevertheless, the perhaps most fundamental question we can ask of AI is that if it succeeds in getting better than humans, as already happens in some areas, like playing AlphaZero, would that represent the advancement of knowledge, even when humans do not understand how the AI works, which is typical? Or would it represent knowledge receding from humans? If the latter, is that desirable and can we afford it?"</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5119382" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">papers.ssrn.com/sol3/papers.cf</span><span class="invisible">m?abstract_id=5119382</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Ignorance" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ignorance</span></a> <a href="https://tldr.nettime.org/tags/Epistemology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Epistemology</span></a> <a href="https://tldr.nettime.org/tags/Bullshit" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Bullshit</span></a></p>
The Japan Times<p>The frenzy to create Ghibli-style AI art using ChatGPT's image-generation tool led to a record surge in users for OpenAI's chatbot last week, straining its servers and temporarily limiting the feature's usage. <a href="https://www.japantimes.co.jp/business/2025/04/02/tech/ghibli-chatgpt-viral-feature/?utm_medium=Social&amp;utm_source=mastodon" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">japantimes.co.jp/business/2025</span><span class="invisible">/04/02/tech/ghibli-chatgpt-viral-feature/?utm_medium=Social&amp;utm_source=mastodon</span></a> <a href="https://mastodon.social/tags/business" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>business</span></a> <a href="https://mastodon.social/tags/tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tech</span></a> <a href="https://mastodon.social/tags/openai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>openai</span></a> <a href="https://mastodon.social/tags/chatgpt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatgpt</span></a> <a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/hayaomiyazaki" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hayaomiyazaki</span></a> <a href="https://mastodon.social/tags/anime" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>anime</span></a> <a href="https://mastodon.social/tags/studioghibli" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>studioghibli</span></a> <a href="https://mastodon.social/tags/chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatbots</span></a> <a href="https://mastodon.social/tags/copyright" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>copyright</span></a></p>
Miguel Afonso Caetano<p>"In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more "problematic use," defined in the paper as "indicators of addiction... including preoccupation, withdrawal symptoms, loss of control, and mood modification."</p><p>To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of "affective cues," which was defined in a joint summary of the research as "aspects of interactions that indicate empathy, affection, or support," they used when chatting with it.</p><p>Though the vast majority of people surveyed didn't engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a "friend." The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model's behavior, too."</p><p><a href="https://futurism.com/the-byte/chatgpt-dependence-addiction" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">futurism.com/the-byte/chatgpt-</span><span class="invisible">dependence-addiction</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a></p>
Annie<p>"To date, the pages themselves have hardly been disseminated on social networks. However, they end up in the index of search engines en masse, poisoning the data records accessed by language models such as <a href="https://ruhr.social/tags/ChatGpt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ChatGpt</span></a>, <a href="https://ruhr.social/tags/Gemini" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemini</span></a> or <a href="https://ruhr.social/tags/Claude" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Claude</span></a> with their lies."</p><p>The infection of western AI chat bots by a Russian propaganda network/Newsguard</p><p><a href="https://ruhr.social/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://ruhr.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://ruhr.social/tags/KI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KI</span></a> <a href="https://ruhr.social/tags/RussianPropaganda" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RussianPropaganda</span></a> <a href="https://ruhr.social/tags/Russia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Russia</span></a> </p><p>March2025PravdaAIMisinformationMonitor.pdf</p>
Miguel Afonso Caetano<p>"Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don’t know the answer, rather than speculate. We wanted to understand how this works.</p><p>It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.</p><p>Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response."</p><p><a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">anthropic.com/research/tracing</span><span class="invisible">-thoughts-language-model</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/Anthropic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Anthropic</span></a> <a href="https://tldr.nettime.org/tags/Claude" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Claude</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a></p>