/int/ - International

Vee haff wayz to make you post.


proxy LLMs and You Bernd 2025-07-22 13:55:30 Nr. 3635
Does anyone here use LLMs like ChatGPT or Grok? What do you use them for? Mock debates? Short stories? Discussing controversial ideas? Fact-checking popular claims? Perhaps asking for an alternate perspective on a deep-seated belief? Just for entertainment? In my LLM-drafted universe, I've devoured a willing-to-be-eaten pop star, digested her, composted her remains into humanure, fertilized a garden with them, and then had in-depth conversations about music with her ghost. While it's a simulation of conversation that doesn't reflect the actual person's views and opinions, especially if they're a celebrity, these prompts can scratch the itch of a particular niche. Of course, there's the added risk of leaving fragments of your brain on a cloud that ultimately isn't owned by you.
I use ChatGPT to deal with long texts: extracting quotes, summarizing paragraphs, and such. It's alright, but it sometimes hallucinates badly, so it's not that good if you need accurate results. NotebookLM by Google seems better for analyzing texts and doesn't hallucinate much. I also used CoPilot for extracting quotes etc.; it was decent.
>>3638 ChatGPT works better if you use the o3 model. Anything else can produce questionable or flat out wrong responses. Haven't used NotebookLM or CoPilot.
>>3640 Can you even change the model nowadays? I don't see that option anymore. >Haven't used NotebookLM or CoPilot. I only used them in a work context. Well, when I had exhausted my ChatGPT tokens, I also used DeepSeek briefly, but it's not great with text files.
>>3641 >Can you even change the model nowadays? I don't see that option anymore. It might be a paid-only feature. o3 is much more powerful than o4.
Used it when it first came out. Got boring after a couple of hours. No interest in this AI shite since. Makes me cringe so hard when people say shit like "I asked Grok/ChatGPT/Claude".
ChatGPT has replaced back-of-the-envelope calculations for me. I also find it pretty useful for introducing new topics, since you can ask for a list of reliable sources and the results will be quite good, often better than going on Wikipedia and looking at their references. Sometimes I ask it to debug my code or discuss possible ways to implement something, and AI is pretty much unbeatable for that purpose, imo. I've tried roleplaying (with ChatGPT, Gemini, and DeepSeek, the latter getting me the best results), but it doesn't come close to a human partner.
>>3646 >>3647 Here's something to ponder.

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, Pattie Maes

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.