The AI Thread
Page 38 of 39
Il_Padrino




Posts: 7938
Location: Greece by the North Sea
PostPosted: Sat, 9th May 2026 13:31    Post subject:
I think people are just projecting their human emotions. Just because an AI says 'I think' doesn't mean it really thinks. It's just the result of it being trained to respond like that.
When an AI starts behaving differently during a task, it's because it's lost track of its original prompt and simply continues from its latest result. Like when AI videos get weird once they last longer than 15 seconds.
They're not blackmailing; when you respond to them as if they are, they simply take your prompt and continue from there.
The complexity with AI lies in the billions of parameters you can play with, and in figuring out which parameter to set to get the result you want. Statistics is something humans traditionally have difficulty grasping (we can hardly imagine a 5-dimensional graph, let alone a 5-billion-dimensional one).


The numbers don't decide, the system is a lie. The river running dry, the wings of butterflies.
And you may pour us away like soup. Like we're pretty broken flowers.
We'll take back what is ours. One day at a time.
Back to top
Iwasfaggotonce




Posts: 755

PostPosted: Sat, 9th May 2026 13:53    Post subject:
I still find it baffling that people don't understand how LLMs work, in principle. Everyone saying "it thinks" is either making money from the headlines or is stupid.
Back to top
LeoNatan
☢ NFOHump Despot ☢



Posts: 74853
Location: Israel
PostPosted: Sat, 9th May 2026 14:13    Post subject:
LLM is not AI.


My IMDb Ratings | Fix NFOHump Cookies | Hide Users / Threads | Embedded Content (Videos/GIFs/Twitter/Reddit) | The Derps Collection

“Don't cry because it's over. Smile because it happened.”
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sat, 9th May 2026 15:14    Post subject:
Il_Padrino wrote:
I think people are just projecting their human emotions. Just because an AI says 'I think' doesn't mean it really thinks. It's just the result of it being trained to respond like that.
When an AI starts behaving differently during a task, it's because it's lost track of its original prompt and simply continues from its latest result. Like when AI videos get weird once they last longer than 15 seconds.
They're not blackmailing; when you respond to them as if they are, they simply take your prompt and continue from there.
The complexity with AI lies in the billions of parameters you can play with, and in figuring out which parameter to set to get the result you want. Statistics is something humans traditionally have difficulty grasping (we can hardly imagine a 5-dimensional graph, let alone a 5-billion-dimensional one).


We are not projecting anything; they are very much "built in our image". That's the whole idea. It's why we call them neural networks: they mimic the brain. Not 1:1, but some parts are pretty close, and it's also why they're so insanely complex that researchers in these fields do not know exactly how they work. It's why we can't even easily decode how they reason.

People who still think this is just a searchable database, or that it just continues the prompt... lol... I mean, it's not even close. They would not be anywhere near this good.

Autocomplete it is not; it is reasoning.

Funny how "random internet dude" thinks he knows how LLMs work, but researchers with years of experience actually developing them do not Laughing

If you believe they fake it or something, looool, they would be called out directly by others in this field (there are millions of people working with LLMs/AIs); OpenAI and Anthropic have so many enemies who would love to embarrass them by calling out the lie. But again, just using AI alone would make you realize very quickly that it's reasoning, not autocomplete.

Here is a short explanation, probably simple enough to understand even for you:

"An LLM fundamentally predicts the next token. That is the training objective. But the important question is: what internal machinery emerges from optimizing that objective at scale"

You only grasp the first part, which isn't even the interesting part; what emerges from this training is what makes it so complex and not well understood.

We can simplify how a human brain works too: neurons "just fire or don't fire", thinking "just predicts sensory input", therefore humans have no intelligence and no actual reasoning. Yet that's only partially true and doesn't give the whole picture, due to the complexity.
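The "predicts the next token" objective from that quote can be sketched in a few lines. Below is a toy bigram counter over a made-up corpus; it is purely illustrative and nothing like a real transformer, which learns continuous weights rather than counting pairs:

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Pick the continuation seen most often after this token in "training".
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

The interesting question in the quote is what happens when this same objective is optimized over billions of weights instead of a lookup table of counts.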
Back to top
Il_Padrino




Posts: 7938
Location: Greece by the North Sea
PostPosted: Sat, 9th May 2026 18:02    Post subject:
I already responded to that 3 pages ago, which you chose to ignore completely.
AI doesn't work like or mimic our brain, simply because we don't know how our brain works.
The term "neural network" was coined in the 1950s, and...
You know, fuck it. Inform yourself.

I advise you to take a course in AI, learning the algorithms and how they work. Hopefully it'll help you see behind the curtain.

Nothing new can emerge from an LLM, because they were trained via backpropagation. In other words, starting from the solution and going back to the question.
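For reference, backpropagation in its most stripped-down form: compute an error at the output, then push its gradient backward to update the weight. This is a one-weight toy model with a made-up target, not how any production model is trained:

```python
# One-parameter model y = w * x, fit to a single target by gradient descent.
w = 0.0
x, target = 2.0, 6.0   # the "question" and the "solution"
lr = 0.1               # learning rate

for _ in range(50):
    y = w * x                     # forward pass
    loss = (y - target) ** 2      # squared error at the output
    grad = 2 * (y - target) * x   # chain rule: d(loss)/dw, propagated backward
    w -= lr * grad                # weight update

print(round(w, 3))  # converges to 3.0, since 3 * 2 == 6
```

Note that "starting from the solution" here describes only how the error signal flows during training, not what the trained weights can later produce.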


Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sat, 9th May 2026 18:45    Post subject:
@vurt
I'm with @Il_Padrino. Thinking and reacting are two different things. Plants are not intelligent; just because we cannot fully explain every internal mechanism and chemical process in a replicable way (like exactly how a plant turns toward the sun, or how a Venus flytrap closes only after repeated triggers) does not make them intelligent.

And "we don't fully understand everything it does internally" proves nothing. Not fully understanding a process does not magically make it reasoning. It just means it is complex.
And an imitation of a thing is not that thing understanding.

Back to thinking vs reacting: LLMs do not think. Leave one alone for a billion years and it will not produce a single independent thought, question, goal, or intention. It does nothing until prompted. It does not ponder, wonder, doubt, or understand. It reacts to input by predicting the next token. That can look intelligent, but appearance is not the same as actual thought.

It seems you are interchanging the words behavior and thought. They are not interchangeable (as if a behavior could only come from a thought process). Behaviors can be complex with no thought behind them.
Lots of things we own can exhibit odd or predictable behaviors, but we do not shift the meaning of 'behavior' from "the collective set of traits exhibited" to "the way thoughts and the subconscious cause a thing to act" for anything other than LLMs/AI. It's the first meaning, but if we apply the second, we assume things that are not present.
Right word, sure; wrong definition assumed.


-We don't control what happens to us in life, but we control how we respond to what happens in life.
-Hard times create strong men, strong men create good times, good times create weak men, and weak men create hard times. -G. Michael Hopf

Disclaimer: Post made by me are of my own creation. A delusional mind relayed in text form.
Back to top
LeoNatan
☢ NFOHump Despot ☢



Posts: 74853
Location: Israel
PostPosted: Sat, 9th May 2026 20:06    Post subject:
That’s why it’s not AI.


Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 01:30    Post subject:
Il_Padrino wrote:
I already responded to that 3 pages ago, which you chose to ignore completely.
AI doesn't work like or mimic our brain, simply because we don't know how our brain works.
The term "neural network" was coined in the 1950s, and...
You know, fuck it. Inform yourself.

I advise you to take a course in AI, learning the algorithms and how they work. Hopefully it'll help you see behind the curtain.

Nothing new can emerge from an LLM, because they were trained via backpropagation. In other words, starting from the solution and going back to the question.


I did respond to this:

"by definition can never be 100% accurate unless your input matches its training data 100%."

my reply was

"Not true at all, that would make them pretty much useless.
LLMs can produce correct outputs on inputs never seen in training. Generalization is the entire point of learning."

You never responded.


We know how the brain works, but only partially. For artificial neural networks, we know the implementation, but we do not fully understand the emergent internal behavior.

For the human brain, take neuron behavior for example: we know quite a few things about it and how to mimic that.

You are 100% wrong with "Nothing new can emerge from an LLM, because they were trained via backpropagation. In other words, starting from the solution and going back to the question."

This is 100% empirically proven with AIs, given how much of the output isn't hand-programmed or in the training data at all. It emerges from understanding the world, from actual reasoning.

If you want to know the exact specifics, you can ask an AI and it will give you a long explanation. You are clearly not educated enough about LLMs.
Emergence isn't the magic you probably think it is, either. It can be observed in weather, ant colonies, protein folding, and more.
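The generalization point ("correct outputs on inputs never seen in training") can be illustrated with the simplest possible learner. The data here is invented; the point is only that a fitted model answers correctly for an input that was never in its training set:

```python
# Least-squares fit of y = a*x + b on three training points.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 5.0, 7.0]   # generated by the hidden rule y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# x = 10 was never seen in training, yet the prediction is exactly right.
print(a * 10.0 + b)  # 21.0
```

An LLM generalizes over a vastly higher-dimensional space, but the principle is the same: the model extracts a rule from the data rather than memorizing the data points.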
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 01:35    Post subject:
AI and human intelligence are not the same; I am not sure how it's possible to mix up the two as much as you do. Artificial means artificial, as in created. Created intelligence will never be a 1:1 replica of human intelligence, and again, I do not think that's something worth striving for either; it serves little to no purpose. AGI (as in "can do ANY human task") is very much not needed, and just not the right way to progress with AI, I think.

I also don't really believe in it, if I am honest. I believe in it in a restricted way, like Alan Turing: "if a machine can converse indistinguishably from a human, then calling it 'intelligent' becomes reasonable."

Turing did not mean that AGI implies:
the machine is conscious
the machine truly understands
the machine has emotions

I do not believe current AIs understand as a human does. I believe they can reason, but not as a human brain reasons, quite obviously, since they are not a 1:1 replica of the human mind. Current AI is well beyond what Turing had in mind; he would 100% say we have AGI.
Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 01:43    Post subject:
vurt wrote:
AI and human intelligence are not the same; I am not sure how it's possible to mix up the two as much as you do.

Ditto Wink

vurt wrote:
I believe in it in a restricted way, like Alan Turing: "if a machine can converse indistinguishably from a human, then calling it 'intelligent' becomes reasonable."

But this is a slippery metric. Who's the judge? When I was young I assumed my Speak & Spell understood me and was talking to me.
And the general public judging it now doesn't know how it works any better than I did with my Speak & Spell, and I was tricked.

As for the Turing test: should we still be using a gauge from 75 years ago? By the standard you set, chatbots from 20+ years ago were intelligent; they conversed and convinced people they were having a conversation they understood. The FIRST chatbot did that, back in the '60s, with its creator's secretary.




Last edited by DXWarlock on Sun, 10th May 2026 01:50; edited 1 time in total
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 01:49    Post subject:
I have never said it's the same, though. I have in fact all this time been extremely careful to say it's not a 1:1 replica of the human brain; we do not (fully) know the human brain and we do not (fully) know neural networks, so no, they are not the same at all.

Here is how they are similar (far too late to write this myself, though):

1. The strongest similarities

These are real and important.

A. Distributed representation

Confidence: 90%

In both brains and neural networks:

information is not usually stored in a single unit
representations are spread across many units simultaneously

Example:
A concept like “cat” is not one neuron.
It is a pattern across many activations.

This is true in:

cortex
transformers
deep vision networks

This similarity is fundamental.

B. Weighted connections

Confidence: 95%

Both systems use:

units connected by variable-strength connections

Brain:

synaptic strengths

ANNs:

weights

Learning changes these connection strengths.

This is one of the closest conceptual parallels.

C. Hierarchical feature extraction

Confidence: 85–90%

Both systems appear to build:

low-level features first
higher abstractions later

Vision example:

Early layers:

edges
contrast
orientations

Middle layers:

textures
shapes

Higher layers:

objects
semantic concepts

This happens in:

mammalian visual cortex
convolutional nets
transformers to some degree

This similarity is surprisingly strong.

D. Emergent internal representations

Confidence: 85%

Neither brains nor large neural nets are fully hand-programmed internally.

Complex representations emerge through learning.

Examples in LLMs:

grammar circuits
language abstractions
translation spaces
latent semantic geometry

Examples in brains:

place cells
face-selective neurons
motion-sensitive neurons

This emergence through optimization is a major similarity.

E. Parallel processing

Confidence: 95%

Brains:

massively parallel neurons

Neural networks:

massively parallel matrix operations

Not identical implementation, but similar computational philosophy:
many simple units interacting simultaneously.

2. Things that are somewhat similar but debated
A. Attention mechanisms vs biological attention

Confidence: 50–70%

Transformer “attention” is not human attention.

But there are analogies:

selective information routing
context-dependent weighting
dynamic relevance assignment

The math is different from neuroscience attention models.

Still, there may be partial functional overlap.

B. Predictive processing

Confidence: 60–75%

Brains may heavily rely on prediction:

anticipating sensory input
minimizing prediction error

LLMs also:

predict next tokens
build contextual expectations

This resemblance is philosophically important.

But:
brain predictive coding theories are still debated
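The "weighted connections" and "distributed representation" items above can be sketched as one tiny layer of units. The weights here are arbitrary made-up values; the point is that the "representation" is the whole activation pattern, not any single unit:

```python
import math

def forward(weights, inputs):
    """One layer of units: a weighted sum per unit, then a squashing nonlinearity."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# A 'concept' lives in the whole activation pattern across units.
W = [[0.9, -0.3],   # 3 units, 2 inputs; arbitrary illustrative weights
     [0.1,  0.8],
     [-0.5, 0.5]]
pattern = forward(W, [1.0, 2.0])
print([round(a, 2) for a in pattern])  # [0.57, 0.85, 0.62]
```

Learning, in both the biological analogy and the artificial case, amounts to adjusting the numbers in `W`; the listed similarities (and their confidence percentages) are about this shared scheme, not about implementation details.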


Last edited by vurt on Sun, 10th May 2026 01:54; edited 1 time in total
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 01:52    Post subject:
Chatbots sucked ass, and you could immediately tell you were not having anything close to an intelligent discussion with them. They were truly laughable. Turing would have laughed at the attempt as well.
Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 01:54    Post subject:
Yeah, saying it's not a 1:1 is irrelevant to me. It could 100% be like one and still be a human vegetable with no intelligence. The structure of it does not matter to me.
Being able to grasp what it's saying matters.

As for it being intelligent because it can find data points to connect based on weighted-bias math:
It's like calling someone a memorization savant by asking them questions in a library and giving them time to look in books for the answers. In no way does that prove they have a great memory.
Nor does claiming they understand nearly everything with great accuracy mean anything: ask him how a nuclear reactor works and he can find a book and read it out, not understanding a word he is saying.

A bot with access to billions of context clues in no way means it understands what you ask it.

I can trip it up all the time when I try. It's painfully easy to get it to answer in ways that show it has no true concept of what it is talking about, just inventing new orders of words for what it has 'library books of'.

vurt wrote:
Chatbots sucked ass, and you could immediately tell you were not having anything close to an intelligent discussion with them. They were truly laughable. Turing would have laughed at the attempt as well.

We are using "enough people think it's conversing with them" as the bar to get over, right? That they feel it is intelligently replying, and they think it is talking to them? Then it passes.
People are suckers for misfiring pattern-finding on humanistic traits, across all the senses, not just speech.




Last edited by DXWarlock on Sun, 10th May 2026 02:02; edited 1 time in total
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 02:01    Post subject:
It understands well enough to reason and to give an output which is correct (most of the time, anyway), without it being in the training data. And no, I do not believe it works exactly like the human brain, or that current AI is even that smart compared to what will exist in 1 year, or in 10 years; it will look laughably bad when we go back and compare.

Yes, the mechanism/structure similarities were explained (by AI) above.

I never claimed it's 100% similar to the human brain or a human. It's artificial, non-human. But please continue to claim that I do, if that's all you have, I guess.


As for the human brain, we have "prediction models" too, and we can dumb down the explanation a lot to make it sound like we are totally braindead and can't reason for shit, and not even by lying one bit about it, just by leaving out a lot.
Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 02:14    Post subject:
I never said you did; you keep saying you are careful not to tie it directly to the brain and only draw parallels, which is why I brought it up. I never said you think it's like a brain. You said you think it's the best analogy, is all.

You keep saying reasoning, but there is no proof it reasons. Where do you get this info? At least I haven't seen any data that says it reasons (in the implied sense of the word) as part of the core 'black box' mechanics.
I am not being facetious here. Is it because it gives an answer a human would need reasoning to come to (and, if using Claude/Grok, you see the call-and-callback 'chat' it has with itself) that you count it as truly reasoning out the reply? Or is it mathematically narrowing the dataset of parameters allowed in a reply?

For a fun experiment, I asked it that. Even AI disagrees with you, about itself:
(I tried to think of the best way not to 'guide it' to an answer and trigger its 'please the user' tendency, so I left it ambiguous whether I agree or disagree.)
Quote:
Is this true, does AI reason? in the way we understand 'reasoning' to mean? Does it 'understand' what we are typing to it, or it types?
Someone said: It understands it well enough to reason and to give an output which is correct (most of the time anyways), this without it being in the training data.


Grok:
No. AI does not reason or understand as humans do—it statistically predicts the next tokens based on patterns in its training data, even when generalizing to novel inputs. The output can appear correct and "reasoned," but there is no genuine comprehension or reasoning process.

ChatGPT:
It does not have human-style understanding, reasoning, or grasps of concepts. It learns patterns from vast data and can use those patterns to make useful inferences, solve novel problems, and explain steps. AI can often produce output that seemingly appears as reasoning — But it is a function of the result appearance, not the process.

Claude:
Not as such. It's sophisticated pattern matching over a vast statistical space. The outputs look like reasoning because the training data contains a lot of reasoning, and the model has learned the shape of it. There's no "grasping" of meaning — just very good prediction. Observable behavior labels using human context for reference, and the process which creates it do not align.

Edit: I asked a follow-up question, since 'model has learned the shape of it' lacked context. The answer:
- Getting really good at predicting "what comes next in a reasoning-shaped piece of text", without ever doing what humans do when they reason. During training, the model is shown billions of fragments of text and asked, over and over, "given this context, what's the next token?" That's the entire job.

Who better to ask than the thing being asked about?
If the AI that trained itself, in a black box whose workings we don't know, says it doesn't, then we have no chance of a better reply. We are just throwing darts at a dartboard in a dark room compared to it.




Last edited by DXWarlock on Sun, 10th May 2026 02:28; edited 1 time in total
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 02:26    Post subject:
"It does not have human-style understanding, reasoning": so exactly what I have been saying.

But yes, in its simplest form, that is 100% what it is. And if you want to dig deeper into it:

"An LLM cannot generate:

supernatural truths
guaranteed factual discoveries from nowhere

But that does NOT mean:

no novelty
no abstraction
no recombination
no new strategies
no new ideas

Humans also learn from prior data:

language
culture
books
observation
imitation

Human creativity is heavily recombinational too.

The real question is:

can recombination plus abstraction produce outputs humans perceive as genuinely novel?

The answer is clearly yes.

Examples:

new code
new melodies
new metaphors
new engineering combinations
novel game strategies

Even evolution works this way:

mutation
recombination
selection

No external “novelty source” is injected.

4. LLMs can produce outputs never seen in training

This is mathematically unavoidable in large combinatorial spaces.

A model trained on text can generate:

entirely new sentences
new codebases
new designs
new jokes
new arguments

because the output space is astronomically larger than the dataset.

The important distinction:

novelty ≠ independence from training
novelty ≠ magic creation ex nihilo

A jazz improviser is constrained by prior learning too."
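The "astronomically larger" output-space claim above is simple arithmetic. The vocabulary and dataset sizes below are illustrative orders of magnitude, not any specific model's figures:

```python
vocab_size = 50_000         # illustrative order of magnitude for an LLM tokenizer
sequence_length = 100       # roughly a short paragraph of tokens
training_tokens = 10 ** 13  # ~10 trillion tokens, a large modern dataset (assumed)

# Every length-100 token sequence is a possible output.
possible_sequences = vocab_size ** sequence_length

# The space of outputs dwarfs any dataset ever collected.
print(len(str(possible_sequences)))  # the count has 470 digits
print(possible_sequences > training_tokens)  # True
```

So most well-formed outputs are necessarily "never seen in training"; the open question is how much of that novelty reflects abstraction rather than shallow recombination.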
Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 02:33    Post subject:
The more I ask it, the more it insists it doesn't reason at all, like humans or otherwise.

Short reply it gave:
It is not reasoning, and not with anything like human understanding or experience attached. A system that gets extraordinarily good at predicting plausible continuations will, as a byproduct, produce enormous amounts of correct-looking reasoning to a person. But the right answer is usually just the most mathematically probable next sequence.
(forgot a line)
Only: "This looks like a context where confident text of approximately this form belongs."

Seems to be the new dogma to explain the unexplained: AI of the gaps Smile

I'm sure if you ask it in the 'right' way, like "Explain how you reason" or "Doesn't AI reason?" or such, you can get it to answer yes, nudging it to please you. AI is reaaaallly good at confirming the user and pleasing them, to keep them coming back to that AI. (There are HUGE articles on this problem being baked into them.)




Last edited by DXWarlock on Sun, 10th May 2026 02:37; edited 1 time in total
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 02:37    Post subject:
My previous quote is also important:

"An LLM fundamentally predicts the next token. That is the training objective. But the important question is: what internal machinery emerges from optimizing that objective at scale." This is where it gets interesting: we don't know everything about what emerges or how it works.

I call it "reasoning", but that doesn't mean I think it's human reasoning. We are talking about an AI, after all. AIs will not say they have human reasoning, because they don't.
Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 02:39    Post subject:
Until they can show me what's in that 'black box', I don't call it anything. I cannot apply attributes to a thing simply because it looks to me like it has them, with no knowledge of how or what it is.
Otherwise, if I had never seen an airplane, I would assume they are a far-advanced bird species. God forbid it be a cargo plane, or I would conclude "It must eat a LOT, look at the size of the anus it opens in the back", attributing eating to a door that opens roughly where a bird shits from.




Last edited by DXWarlock on Sun, 10th May 2026 02:40; edited 1 time in total
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 02:40    Post subject:
Fair enough.
Back to top
Il_Padrino




Posts: 7938
Location: Greece by the North Sea
PostPosted: Sun, 10th May 2026 10:02    Post subject:
vurt wrote:


I did respond to this:

"by definition can never be 100% accurate unless your input matches its training data 100%."

my reply was

"Not true at all, that would make them pretty much useless.
LLMs can produce correct outputs on inputs never seen in training. Generalization is the entire point of learning."

You never responded.


We know how the brain works, but only partially. For artificial neural networks, we know the implementation, but we do not fully understand the emergent internal behavior.

For the human brain, take neuron behavior for example: we know quite a few things about it and how to mimic that.

You are 100% wrong with "Nothing new can emerge from an LLM, because they were trained via backpropagation. In other words, starting from the solution and going back to the question."

This is 100% empirically proven with AIs, given how much of the output isn't hand-programmed or in the training data at all. It emerges from understanding the world, from actual reasoning.

If you want to know the exact specifics, you can ask an AI and it will give you a long explanation. You are clearly not educated enough about LLMs.
Emergence isn't the magic you probably think it is, either. It can be observed in weather, ant colonies, protein folding, and more.


When you draw a line on a graph between two known data points, you'd call the data on that line 'emergent' too. It's just statistics. The more data points, the more accurate the results when you feed it other data.

And that is not even remotely close to how our brain works.
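The line-between-data-points picture being described, for concreteness (the two data points are invented):

```python
def interpolate(x0, y0, x1, y1, x):
    """Value on the straight line through (x0, y0) and (x1, y1), evaluated at x."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Two known data points; everything between them is "read off the line".
print(interpolate(0.0, 1.0, 10.0, 5.0, 2.5))  # 2.0
```

The debate in the thread is essentially whether LLM generalization is just this picture scaled up to billions of dimensions, or something qualitatively different.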


Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 12:32    Post subject:
It's an oversimplification.

Here is an AI breaking it down for you (you can just read the last part if you want, plus the large-font text):

 Spoiler:
 


This part in particular:

The similarity is not: “transformers are brains”
The similarity is:
both are large adaptive systems that learn internal representations from data.
Back to top
LeoNatan
☢ NFOHump Despot ☢



Posts: 74853
Location: Israel
PostPosted: Sun, 10th May 2026 13:58    Post subject:
In this thread, two chatbots going at it. Laughing No intelligence to be seen anywhere Cool Face


Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 14:38    Post subject:
The positive thing about that is it would then be very easy for you to come up with some great arguments, no?

I just see your usual seething and coping. I guess you aren't as smart as an LLM.
Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 18:19    Post subject:
LeoNatan wrote:
In this thread, two chatbots going at it. Laughing No intelligence to be seen anywhere Cool Face

Great, thanks...
It went from two chatbots to two chatbots and a spambot. Good job.


Back to top
DXWarlock
VIP Member



Posts: 11739
Location: Florida, USA
PostPosted: Sun, 10th May 2026 18:28    Post subject:
@vurt
Those are compelling posts.

BUT curious, care to share how you are asking it these things? What is the phrasing?
They are people pleasers [gotta keep those user subs baby] and I am curious if you are asking it, even without intending to, to please you with an answer (no innuendo there Smile )
Anytime I need it to do anything, I have learned to be super careful about giving it even the slightest whiff of a stance, answer, or inclination I have on it, or it will lean on that so hard trying to match me, even if just as a "No, But [pandering bullshit]" (random bullshit so I don't feel bad it corrected me)

edited: Listed two backwards.
Chat GPT is by FAR the worst offender it might as well give me a reach around while I ask. Claude is next. And Grok isn't without its simping. [Only 3 I use, each is better at certain things I need]


Like for example would you ask:
Do LLM think and reason?
Or
Explain how LLM can think and reason?

Rough short examples, and maybe a little TOO upfront for what I mean. But hope you get the idea.

With the second one, it will bend over backwards to try to give you an answer that in some way complies with the question. Even if it tells you they don't, it will toss in some nugget of "feel good" for you, validating your apparent stance with words that basically say "They do not reason, but they may in some abstract way we do not know yet" or something.

I suspect it was accidentally led by how you asked, without you knowing you did: the way it uses stuff like "partly true," "however," "directionally correct."
Not sure if it's agreeing with me or you, as I don't know the question asked. But it's "yes man"-ing someone with those types of replies.
And it looks like a Claude reply? It feels a lot like Claude's "Here is an answer, but don't be mad if I say different, [you/they/whoever depending on the framing of the question] could be right in some abstract way" way of answering accidentally loaded questions.
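The neutral-vs-leading distinction above can be sketched as a toy check. This is purely illustrative (my own heuristic, not anything DXWarlock or vurt actually uses): it just flags prompts whose opening phrasing already presupposes the answer.

```python
# Toy heuristic, illustrative only: a prompt like
# "Explain how LLMs can think and reason?" presupposes its answer,
# while "Do LLMs think and reason?" leaves the question open.
LEADING_PREFIXES = ("explain how", "explain why", "describe how", "why does")

def is_leading(prompt: str) -> bool:
    """Return True if the prompt opens by presupposing its own answer."""
    return prompt.lower().strip().startswith(LEADING_PREFIXES)

print(is_leading("Do LLMs think and reason?"))               # prints False
print(is_leading("Explain how LLMs can think and reason?"))  # prints True
```

Real sycophancy is of course far subtler than a prefix match; this just makes the two example phrasings concrete.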




Last edited by DXWarlock on Sun, 10th May 2026 19:02; edited 6 times in total
Back to top
Iwasfaggotonce




Posts: 755

PostPosted: Sun, 10th May 2026 18:45    Post subject:
Sam Altman said they need $7 trillion to finish building AGI Laughing (cough cough, message in IPO context, cough cough)
Back to top
zenux




Posts: 2852
Location: lɘɒɿƨI
PostPosted: Sun, 10th May 2026 21:16    Post subject:
Say goodbye to your three-fingers-in-front-of-the-face deepfake test.
Quote:

A realtime deepfake software package called Haotian AI, obtained and tested by 404 Media reporter Joseph Cox, is being sold to operators inside Southeast Asian scam compounds, allowing users to replace their face in live video calls across WhatsApp, Microsoft Teams, Zoom, TikTok, Instagram, and YouTube. The investigation, published this week, marks the first time a journalist has directly tested the tool.

Haotian AI operates as a virtual camera that overlays a swapped face in real time, preserving lighting, hand-over-face occlusion, and facial expression movement. The software is marketed in Chinese, priced at $1,998 per year with an additional $498 per custom face model, and payment is accepted in Tether (TRC20) on the TRON network. Hardware requirements include an Nvidia RTX 4080 SUPER GPU, and the vendor offers physical installation services in parts of Cambodia. Cybercrime NGO Chong Lua Dao, which includes former hacker Hieu Minh Ngo, told 404 Media the operation is based in Cambodia, with video evidence of staff installing the software in an office building in Phnom Penh.

https://idtechwire.com/investigation-exposes-realtime-deepfake-tool-sold-to-scam-compounds/

https://www.instagram.com/reels/DYDwiaMJrM4/

I know, I know, deepfake software is not AI.
Back to top
LeoNatan
☢ NFOHump Despot ☢



Posts: 74853
Location: Israel
PostPosted: Sun, 10th May 2026 22:02    Post subject:
vurt wrote:
The positive thing with that is that it would be very easy for you to come up with some great arguments then, no?

I just see your usual seething and coping. I guess you aren't as smart as an LLM.

"Never associate with idiots on their own level, because, being an intelligent man, you'll try to deal with them on their level—and on their level they'll beat you every time." Laughing


My IMDb Ratings | Fix NFOHump Cookies | Hide Users / Threads | Embedded Content (Videos/GIFs/Twitter/Reddit) | The Derps Collection

“Don't cry because it's over. Smile because it happened.”
Back to top
vurt




Posts: 14262
Location: Sweden
PostPosted: Sun, 10th May 2026 22:12    Post subject:
Basically, I just asked what it thinks about the argument, and pasted the argument in.

As for confidence level, that is the first thing I add to my settings for any model I use. I also ask it to self-check itself so it's lying and hallucinating less. I think settings are the most important part to set up before you ask an AI anything.
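The kind of settings described here can be sketched as a system prompt in the common role/content chat format. This is a hedged sketch in my own wording (not vurt's actual settings text): it asks the model to state confidence and to self-check before answering.

```python
# Hedged sketch, assumed wording: a system prompt requesting stated
# confidence levels and a self-check pass, prepended to the user's question.
SYSTEM_PROMPT = (
    "For every factual claim, state a confidence level (high/medium/low). "
    "Before finalizing, re-read your draft and flag anything unsupported."
)

def build_messages(user_question: str) -> list:
    """Assemble a chat-style message list in the usual role/content format."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("What do you think about this argument? <argument pasted here>")
print(msgs[0]["role"])  # prints system
```

Most chat APIs accept a message list shaped like this, so the same settings carry over between models.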

DXWarlock wrote:
@vurt
Those are compelling posts.

BUT curious, care to share how you are asking it these things? What is the phrasing?
[...]


Last edited by vurt on Sun, 10th May 2026 22:16; edited 2 times in total
Back to top
Powered by phpBB 2.0.8 © 2001, 2002 phpBB Group