Discussion:
apple intelligence not so intelligent
badgolferman
2025-01-16 16:02:46 UTC
Reply
Permalink
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.

For over a month, roughly as long as the feature has been available to
iPhone users, publishers have found that it consistently generates
false information and pushes it to millions of users.

Despite broadcasting a barrage of fabrications for weeks, Apple has yet
to meaningfully address the problem.

"This is my periodic rant that Apple Intelligence is so bad that today
it got every fact wrong in its AI summary of Washington Post news
alerts," the newspaper's tech columnist Geoffrey Fowler wrote in a post
on Bluesky this week.

Fowler appended a screenshot of an alert, which claimed that Pete
Hegseth, who's been facing a confrontational confirmation hearing for
the role of defense secretary this week, had been fired by his former
employer, Fox News — which is false and not what the WaPo's syndication
of an Associated Press story actually said. The AI alert also claimed
that Florida senator Marco Rubio had been sworn in as secretary of
state, which is also false as of the time of writing.

"It's wildly irresponsible that Apple doesn't turn off summaries for
news apps until it gets a bit better at this AI thing," Fowler added.

The constant blunders of Apple's AI summaries put the tech's nagging
shortcomings on full display, demonstrating that even tech giants like
Apple are failing miserably to successfully integrate AI without
constantly embarrassing themselves.

AI models are still coming up with all sorts of "hallucinated" lies, a
problem experts believe could be intrinsic to the tech. After all,
large language models like the one powering Apple's summarizing feature
simply predict the next word based on probability and are incapable of
actually understanding the content they're paraphrasing, at least for
the time being.
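
(As a toy illustration of that "predict the next word" loop — nothing
like Apple's actual model, just the bare statistical idea with made-up
word counts, sketched in Python:)

import random

# Toy next-word predictor: counts of which word tends to follow which.
# A real LLM scores candidates with a neural network over tokens, but
# the loop is conceptually the same: weigh candidates, pick one.
follow_counts = {
    "breaking": {"news": 8, "records": 2},
    "news": {"alert": 5, "summary": 3, "today": 2},
}

def next_word(prev):
    candidates = follow_counts.get(prev, {})
    if not candidates:
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    # Chosen in proportion to observed frequency, not by "understanding".
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("breaking"))  # usually "news", occasionally "records"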

And the stakes are high, given the context. Apple's notifications are
intended to alert iPhone users to breaking news — not sow distrust and
confusion.

The story also highlights a stark power imbalance, with news
organizations powerless to determine how Apple represents their work to
its vast number of users.

"News organizations have vigorously complained to Apple about this, but
we have no power over what iOS does to the accurate and expertly
crafted alerts we send out," Fowler wrote in a followup.

In December, the BBC first filed a complaint with Apple after the
feature mistakenly claimed that Luigi Mangione, the man accused of killing
UnitedHealthcare CEO Brian Thompson, had shot himself — an egregious
and easily disproven fabrication.

Last week, Apple finally caved and responded to the complaint, vowing
to add a clarifying disclaimer that the summaries were AI-generated
while also attempting to distance itself from bearing any
responsibility.

"Apple Intelligence features are in beta and we are continuously making
improvements with the help of user feedback," a company spokesperson
told the BBC in a statement. "A software update in the coming weeks
will further clarify when the text being displayed is summarization
provided by Apple Intelligence."

"We encourage users to report a concern if they view an unexpected
notification summary," the company continued.

The disclaimer unintentionally points to the dubious value proposition
of today's AI: what's the point of a summarizing feature if the company
is forced to include a disclaimer on each one that it might be entirely
wrong? Should Apple's customers really be the ones responsible for
pointing out each time its AI summaries are spreading lies?

"It just transfers the responsibility to users, who — in an already
confusing information landscape — will be expected to check if
information is true or not," Reporters Without Borders technology and
journalism desk head Vincent Berthier told the BBC.

Journalists are particularly worried about further eroding trust in the
news industry, a pertinent topic given the tidal wave of AI slop that
has been crashing over the internet.

"At a time where access to accurate reporting has never been more
important, the public must not be placed in a position of
second-guessing the accuracy of news they receive," the National Union
of Journalists general secretary Laura Davison told the BBC.

https://futurism.com/apple-ai-butchering-news-summaries
--
"If you don't read the newspapers you are uninformed; if you do read
the newspapers you are misinformed." ~ Mark Twain
Alan
2025-01-16 19:47:22 UTC
Reply
Permalink
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.

This is not news to anyone who has actually been paying attention.
badgolferman
2025-01-17 01:42:50 UTC
Reply
Permalink
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Paul Goodman
2025-01-17 02:38:15 UTC
Reply
Permalink
Post by badgolferman
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
--
Paul Goodman
badgolferman
2025-01-17 02:45:42 UTC
Reply
Permalink
Post by Paul Goodman
Post by badgolferman
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Users are being fed bad data by Apple Intelligence and the company knows
it. Why don’t they make it more mature before rolling it out?
Gelato
2025-01-17 04:42:27 UTC
Reply
Permalink
Post by badgolferman
Post by Paul Goodman
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Users are being fed bad data by Apple Intelligence and the company knows
it. Why don't they make it more mature before rolling it out?
It's kind of like Apple Maps was in the early years, when Apple felt
pressured to produce something due to the dominance of a competing product.
Alan
2025-01-17 17:43:28 UTC
Reply
Permalink
Post by badgolferman
Post by Paul Goodman
Post by badgolferman
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Users are being fed bad data by Apple Intelligence and the company knows
it. Why don’t they make it more mature before rolling it out?
Why don't you acknowledge you were wrong before you go on talking about
this?
Your Name
2025-01-17 05:28:35 UTC
Reply
Permalink
Post by Paul Goodman
Post by badgolferman
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Apple has reportedly shut down the AI news summaries while it figures
out how to make them less error-ridden ... good luck with that! AI is all
utter crap best thrown in the bin.
Chris
2025-01-17 15:07:12 UTC
Reply
Permalink
Post by Your Name
Post by Paul Goodman
Post by badgolferman
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Apple has reportedly
Reported where?
Post by Your Name
shut down the AI news summaries while it figures
out how to make them less error-ridden ... good luck with that! AI is all
utter crap best thrown in the bin.
The only report I've seen is from the BBC where Apple has acknowledged the
problem and will be addressing it in a future update.
Chris
2025-01-17 15:19:35 UTC
Reply
Permalink
Post by Chris
Post by Your Name
Post by Paul Goodman
Post by badgolferman
Post by Alan
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Apple has reportedly
Reported where?
OK. I see it now, from a few hours ago.
https://www.theguardian.com/technology/2025/jan/17/apple-suspends-ai-generated-news-alert-service-after-bbc-complaint

Given it was such fresh news, a link would have been helpful, Your Name.
Your Name
2025-01-16 21:15:22 UTC
Reply
Permalink
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>

So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
Alan
2025-01-16 21:20:41 UTC
Reply
Permalink
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
I don't agree with the last part.

This is the early-excitement-turning-to-disillusionment phase of AI.

It clearly will become better and more important.
Rick
2025-01-16 22:18:10 UTC
Reply
Permalink
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed. I
agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Your Name
2025-01-16 23:55:42 UTC
Reply
Permalink
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.

When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...

The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!!
:-(
Alan
2025-01-17 00:00:30 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-
hyped crap that they are. Hopefully it will be just another quickly
gone tech fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Never say "never".

Evolutionary algorithms are already a thing, and they're only going to
get better.
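
(For illustration, a minimal sketch of the idea in Python — a toy
example, not any particular system: score a population of candidates,
keep the fittest, and refill the next generation with mutated copies.)

import random

# Toy evolutionary algorithm: evolve a bit-string toward all ones.
# Purely illustrative; real systems use far richer representations.
LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05

def fitness(bits):
    return sum(bits)  # count of ones; higher is "fitter"

def mutate(bits):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fitter half, refill with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(fitness(population[0]), "of", LENGTH, "bits correct")
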
Post by Your Name
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
That doesn't speak to its utility.
Post by Your Name
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!! :-(
I agree with this entirely.

I just had a conversation yesterday with a client and friend of mine
who's an MD, and he was interested in starting to use AI to assist in
diagnosing patients. I stressed to him that, while it could be a useful
tool in theory, it cannot, at least for now (and probably for quite a
while), be trusted with anything critical. The trained professional very
much still needs to be the one in charge.
Rick
2025-01-17 00:48:49 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless,
over-hyped crap that they are. Hopefully it will be just another
quickly gone tech fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
Post by Your Name
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!! :-(
Your Name
2025-01-17 05:38:46 UTC
Reply
Permalink
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
Post by Rick
Post by Your Name
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!! :-(
John Hill
2025-01-17 08:23:53 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
And not only the tech companies. The Labour government has announced a massive
programme to explore, enhance and encourage the use of AI.
Me, I'm with the sceptics. I've turned it off on my iMac and my mobile devices
are too old to engage with it.
Remember the dot-com bubble of the early 2000s? Marconi went for it and went
bust as a result. Probably other, less well-established companies did too.
Post by Your Name
Post by Rick
Post by Your Name
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!! :-(
--
An infinitely complex system can fail in an infinite number of ways.
badgolferman
2025-01-17 12:55:22 UTC
Reply
Permalink
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
--
"The right to be heard does not automatically include the right to be
taken seriously." ~ Hubert H. Humphrey
Johnny LaRue
2025-01-17 14:34:14 UTC
Reply
Permalink
On Jan 17, 2025 at 7:55:22 AM EST, "badgolferman"
Post by badgolferman
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
They won't listen anyway. They need to lose billions of dollars before they
will understand.

Again. Remember "smart speakers"? Remember "virtual reality"? Both were once
"the next big thing".

Both are now just quaint memories.
Chris
2025-01-17 15:05:59 UTC
Reply
Permalink
Post by Johnny LaRue
On Jan 17, 2025 at 7:55:22 AM EST, "badgolferman"
Post by badgolferman
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
They won't listen anyway. They need to lose billions of dollars before they
will understand.
Again. Remember "smart speakers"? Remember "virtual reality"? Both were once
"the next big thing".
Both are now just quaint memories.
Smart speakers are ubiquitous.
Your Name
2025-01-17 21:16:49 UTC
Reply
Permalink
Post by Johnny LaRue
On Jan 17, 2025 at 7:55:22 AM EST, "badgolferman"
Post by badgolferman
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
They won't listen anyway. They need to lose billions of dollars before they
will understand.
Many of them in big business management won't listen even when they do
lose billions.

There's a company here that was recently declared bankrupt and had lots
hundreds of thousands of dollars, but the CEO who owned the majority of
the stock bought up all the rest of the stock and "re-opened" the same
business under the same name. Either he's an idiot or the bankruptcy
was a scam to get out of paying all the debt ... or more likely both.
Post by Johnny LaRue
Again. Remember "smart speakers"? Remember "virtual reality"? Both were once
"the next big thing".
Both are now just quaint memories.
Yep, there's lots and lots of those failed fads and ones that are
nowhere near as popular as they were hyped to become.

Smart Glasses / AR glasses (as opposed to full masks) is another one.
The fools in the tech companies thought people would want to wear
glasses ... and yet for decades most people who need eyesight
correction have been moving away from glasses to contact lenses. :-\

Apple's Touch Bar on their laptops was announced with a bang, and
quickly went out with barely a whimper.

The tech geeks and nerds might be interested in all this gimmickry, but
the vast majority of the general public don't want it and won't use it
(most don't even know it exists). Same happens in cars with all their
modern gimmickry that is barely used by anyone.
Your Name
2025-01-18 00:29:01 UTC
Reply
Permalink
Post by Your Name
Post by Johnny LaRue
On Jan 17, 2025 at 7:55:22 AM EST, "badgolferman"
Post by badgolferman
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
They won't listen anyway. They need to lose billions of dollars before they
will understand.
Many of them in big business management won't listen even when they do
lose billions.
There's a company here that was recently declared bankrupt and had lots
hundreds of thousands of dollars,
Damn typos!! That was meant to say "had LOST hundreds of thousands of
dollars". :-\
Post by Your Name
but the CEO who owned the majority of the stock bought up all the rest
of the stock and "re-opened" the same business under the same name.
Either he's an idiot or the bankruptcy was a scam to get out of
paying all the debt ... or more likely both.
Marion
2025-01-17 18:03:01 UTC
Reply
Permalink
Post by badgolferman
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
Had nospam's contract not expired, he would have definitively claimed AI is
"not needed" "not wanted"
simply because Apple doesn't have it while everyone else already does.
Alan
2025-01-17 18:28:46 UTC
Reply
Permalink
Post by Marion
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies.  Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
Had nospam's contract not expired, he would have definitively claimed AI is
  "not needed"        "not wanted"
simply because Apple doesn't have it while everyone else already does.
'Happy New Year! Let's all be helpful together, as a well-honed team.
Working together...'
Your Name
2025-01-17 21:07:46 UTC
Reply
Permalink
Post by badgolferman
Post by Your Name
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them on how not to waste
billions of dollars?
You just have to look at history to see all the fads "big business"
companies jump onto the bandwagon of - some succeeded, many failed.

Silly AI may not fail completely, but it's certainly never going to
live up to all the hype fools are putting on it and most people
couldn't give a damn about it let alone actually want to use it. It's
simply another fad trying to get foolish people to part with their
money upgrading to the latest toys.
Chris
2025-01-17 15:06:00 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
I see that you're on the fence about it all... ;)
Rick
2025-01-17 15:21:22 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all
be avoided by any sane person because they're all just useless,
over-hyped crap that they are. Hopefully it will be just another
quickly gone tech fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what
it has been programmed to do. It does NOT "learn" and it does NOT
"think", and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on
that list), but from the limited testing I have done, I think
certain more mature LLM products like ChatGPT and Copilot are
actually quite useful when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
It's no more "massively flawed" than any other software or programming -
because that's all AI is. It is just software - written by and for
humans - and it is up to each person to assess how much benefit, if any,
to derive from it. You've assessed that it's useless to you and that's
fine. Many people do derive perceived benefits from it, and that's fine
for them. And you can call them "idiot" tech companies if you want, but
that just sounds like envy, as many of those companies are doing quite
well and will likely continue to do so. IF AI is just a fad, as you
suggest, it will ultimately fizzle out, but as someone with more than 40
years of IT experience, I actually see it as the natural and normal
evolution of software development.
Post by Your Name
Post by Rick
Post by Your Name
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!! :-(
Your Name
2025-01-17 21:21:15 UTC
Reply
Permalink
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
It's no more "massively flawed" than any other software or programming
- because that's all AI is. It is just software - written by and for
humans - and it is up to each person to assess how much benefit, if any,
to derive from it. You've assessed that it's useless to you and that's
fine. Many people do derive perceived benefits from it, and that's
fine for them. And you can call them "idiot" tech companies if you
want, but that just sounds like envy, as many of those companies are
doing quite well and will likely continue to do so. IF AI is just a
fad, as you suggest, it will ultimately fizzle out, but as someone with
more than 40 years of IT experience, I actually see it as the natural
and normal evolution of software development.
The companies jumping on the bandwagon to include AI in everything were
already successful before the AI fad and most will continue to be
successful after it hopefully becomes a footnote in history (alongside
self-driving cars).

The companies that create the AI software are currently successful
simply because all the other companies are stupidly jumping on the
bandwagon, but as has been seen numerous times before, that bubble can
quickly burst.
Alan
2025-01-17 17:44:42 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all
be avoided by any sane person because they're all just useless,
over-hyped crap that they are. Hopefully it will be just another
quickly gone tech fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what
it has been programmed to do. It does NOT "learn" and it does NOT
"think", and it can never do either of those things.
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on
that list), but from the limited testing I have done, I think
certain more mature LLM products like ChatGPT and Copilot are
actually quite useful when used properly.
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
So said every nay-sayer about every new idea ever.
Chris
2025-01-17 15:05:55 UTC
Reply
Permalink
Post by Your Name
Post by Rick
Post by Your Name
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do.
I disagree. There are many features and outputs of AI that aren't or
weren't predictable based on the inputs you give them. Just look at some of
the errors that have happened with driverless cars.
Post by Your Name
It does NOT "learn"
It definitely does.
Post by Your Name
and it does NOT "think",
and it can never do either of those things.
I used to believe that too. Not anymore.

It'll be a long time before an AI will compete with humans, but elementary
thinking won't be that far away I suspect.
Post by Your Name
Post by Rick
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
That's old news. The cutting edge is almost indistinguishable. You need a
trained (human) eye to spot the very best.
Post by Your Name
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!!
:-(
Properly trained, and importantly properly used, ML/AI systems can be very
useful.

Researchers have to keep being reminded to not believe the hype, but focus
on the evidence.
Colour Sergeant Bourne
2025-01-18 18:57:53 UTC
Reply
Permalink
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
Maybe so...but it's for the children and the environment :-)
--
The forest was shrinking but the trees kept voting for the axe, for the
axe was clever and convinced the trees that because his handle was made
of wood, he was one of them
Your Name
2025-01-18 20:51:47 UTC
Reply
Permalink
Post by Colour Sergeant Bourne
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
Maybe so...but it's for the children and the environment :-)
According to a recent report, all the servers, cooling, etc. needed for
this silly AI fad will be worse for the environment than all of the
cars driving around in California. :-\
Alan
2025-01-19 00:12:51 UTC
Reply
Permalink
Post by Your Name
Post by Colour Sergeant Bourne
Post by badgolferman
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
<snip>
Maybe so...but it's for the children and the environment :-)
According to a recent report, all the servers, cooling, etc. needed for
this silly AI fad will be worse for the environment than all of the cars
driving around in California.  :-\
You sound like every fuddy-duddy ever when confronted with new technology.

AI is NOT a fad.

It is obviously in its infancy and not very good for very much yet.

But it is also obvious that it is going to get better.
