bolha.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a Brazilian IT community. We love IT/DevOps/Cloud, but we also love to talk about life, the universe, and more.

#explainableai


"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.

Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.

OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”
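To make the idea concrete, here is a minimal, illustrative sketch of exact-match search: given a model response and a corpus, find the longest run of consecutive words that appears verbatim in a training document. The corpus, document IDs, and function name are invented for illustration; the real OLMoTrace system indexes the full OLMo pre-training data at a vastly larger scale.

```python
import re

def tokenize(text):
    """Lowercase and strip punctuation so matching compares words only."""
    return re.findall(r"[a-z0-9]+", text.lower())

def find_verbatim_match(response, corpus, min_words=4):
    """Return the longest span of at least `min_words` consecutive
    response words that appears verbatim in a corpus document,
    together with that document's id, or None if nothing matches.
    (A naive substring scan; real systems use indexed structures.)"""
    words = tokenize(response)
    for size in range(len(words), min_words - 1, -1):  # longest spans first
        for start in range(len(words) - size + 1):
            snippet = " ".join(words[start:start + size])
            for doc_id, doc in corpus.items():
                if snippet in " ".join(tokenize(doc)):
                    return snippet, doc_id
    return None

# Toy "training corpus" with made-up document ids.
corpus = {
    "doc-1": "The mitochondria is the powerhouse of the cell.",
    "doc-2": "Water boils at 100 degrees Celsius at sea level.",
}
response = "According to my training data, water boils at 100 degrees Celsius at sea level."
print(find_verbatim_match(response, corpus))
# -> ('water boils at 100 degrees celsius at sea level', 'doc-2')
```

The quadratic scan over spans is only workable for a toy corpus; the same longest-match idea at trillion-token scale requires precomputed indexes, but the matching criterion itself stays this simple.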

“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.

“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.

To date, no other chatbot on the market provides the ability to trace a model’s response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."

thenewstack.io/llms-can-now-tr

The New Stack · Breakthrough: LLM Traces Outputs to Specific Training Data. Ai2's OLMoTrace uses string matching to reveal the exact sources behind chatbot responses.

Applications are now open for the 2025 International Semantic Web Research Summer School - #ISWS2025
in Bertinoro, Italy, from June 8-14, 2025
Topic: Knowledge Graphs for Reliable AI
Application Deadline: March 25, 2025
Webpage: 2025.semanticwebschool.org/

Great keynote speakers: Frank van Harmelen (VU), Natasha Noy (Google), Enrico Motta (KMI)

#semanticweb #knowledgegraphs #AI #generativeAI #responsibleAI #explainableAI #reliableAI @albertmeronyo @AxelPolleres @lysander07

Befuddled by all the recent #DeepSeek hullabaloo? Here's a brief Q&A that cuts through the fog.

Q: Did #DeepSeek just up-end everything we know about #AImodels and #LLMs?
A: Nope. It just demonstrates one of several new approaches to model training and logic chaining, but still uses the same basic building blocks.

Q: Does this mean DeepSeek can think?
A: Nope. Still not #Skynet. Logic chains are just one of several techniques an instruction-oriented AI system can use to try to stay on track and focus on a coherent goal.

Q: Is logic chaining #ExplainableAI?
A: Nope. Even the "thinking" output of DeepSeek is a linguistic approximation of the pattern-seeking behavior of most LLMs.
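One way to see why the "thinking" output is just more generated language: logic chaining can be sketched as a loop that feeds each generated step back into the next prompt. The stub model below is invented for illustration (a real system would call an LLM); the point is that the intermediate "reasoning" is ordinary text appended to the context, not a trace of actual computation.

```python
def stub_model(prompt):
    """Stand-in for an LLM call: returns a canned 'reasoning' step
    based on how many steps are already in the prompt."""
    canned = [
        "Restate the goal in my own words.",
        "List what is known and what is missing.",
        "Combine the knowns into a final answer.",
    ]
    n = prompt.count("Step")  # crude progress marker
    return canned[min(n, len(canned) - 1)]

def chain(model, task, steps=3):
    """Logic chaining as a loop: each call sees all earlier steps,
    which keeps generation anchored to the stated task."""
    transcript = f"Task: {task}\n"
    for i in range(steps):
        step = model(transcript)  # model conditions on prior steps
        transcript += f"Step {i + 1}: {step}\n"
    return transcript

print(chain(stub_model, "Summarize a document"))
```

Swap `stub_model` for a real model call and the structure is the same: the chain improves coherence by conditioning on its own prior text, which is exactly why it is an approximation of reasoning rather than an explanation of it.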

Q: Why is everyone in an uproar about DeepSeek?
A: Because most people think ChatGPT defines what AI is, what it can do, and what its limitations are.

Q: Why are the people panicking about DeepSeek talking about AI hegemony and geopolitics?
A: Because they're more concerned with investment returns, or with charging for expensive GPUs and SaaS services, than with scientific advancement or improving individual productivity with new technology.