Comments on: Cerebras Smashes AI Wide Open, Countering Hypocrites
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Alastair | Wed, 05 Apr 2023 06:50:34 +0000 | In reply to FeepingCreature.
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206824

Exactly. Giving a helping hand to every bad actor, rogue state, or hostile superpower is reckless and naive. A stupid thing to have done.

By: Timothy Prickett Morgan | Sat, 01 Apr 2023 03:37:09 +0000 | In reply to HuMo.
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206622

Wordpockyclypse

By: HuMo | Sat, 01 Apr 2023 00:18:40 +0000 | In reply to Timothy Prickett Morgan.
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206617

Wordpocalypse?

By: Timothy Prickett Morgan | Fri, 31 Mar 2023 11:52:27 +0000 | In reply to FeepingCreature.
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206573

Oh good. Someone did the subtext. Thank you.

https://www.itjungle.com/2023/01/23/it-is-time-to-have-a-group-chat-about-ai/

By: FeepingCreature | Fri, 31 Mar 2023 11:37:39 +0000
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206571

> And for those who are worried about how AI might be used or misused, my view is that one of the great tools we have to combat malfeasance is sunshine. You put things out in the open where everybody can see it. And then you can identify when and how a technology is being used badly. Keeping things in the hands of a few doesn’t strike me exactly as the way to deal with new, complicated technology.

And that’s why the United States Government should immediately publish how to build hydrogen bombs.

You’ll be able to identify when the technology is being used badly from the handy mushroom clouds.

Keeping dangerous things in the hands of a few isn’t great; I’d rather have them in the hands of nobody. But it still beats having them in the hands of everyone with a GPU.

By: Hubert | Fri, 31 Mar 2023 00:04:05 +0000 | In reply to UK.
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206554

That’s quite an interesting video from Regent Law School, reminding us all that the “goals” (if any) of our interlocutors should be correctly apprehended for successful communicative exchanges. I expect that interactive LLM software has the overall goal of providing an environment of interaction with human users that makes the machine’s textual responses to user queries appear human as well. Part of this would be NLP parsing of the query, followed by response production with the corresponding domain-specific vocabulary, syntax, and grammar. In the absence of a model of cognition (or domain-specific sub-models thereof), LLMs can only guarantee that their outputs are syntactically similar to what a human would produce within the query’s subject area (domain), without much guarantee of accuracy beyond accidental correctness. This is where I think a sub-goal of persuasiveness may intervene in current LLM efforts (either explicitly programmed in, or resulting “inadvertently” from particular aspects of the training dataset), directing responses toward a written style of apparent authoritativeness (sophist rhetoric) that effectively hides the lack of a cognitive backend and makes the system’s outputs appear more credible to users than they actually are. Humans do the same thing all the time (eh-eh-eh!).

By: Tom Miller | Thu, 30 Mar 2023 15:25:03 +0000
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206526

Right on, Cerebras.
This gives less deep-pocketed researchers and open software developers a hand up!

Tom M

By: UK | Thu, 30 Mar 2023 10:57:10 +0000 | In reply to Hubert.
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206511

“Don’t talk to an AI”, because it can, well, will be used against you – this instantly reminded me of https://www.youtube.com/watch?v=d-7o9xYp7eE “Don’t talk to the police”.

By: Hubert | Wed, 29 Mar 2023 20:34:33 +0000
https://www.nextplatform.com/2023/03/29/cerebras-smashes-ai-wide-open-countering-hypocrites/#comment-206479

Good going, Cerebras … and a great way to celebrate the 80th anniversary of Saint-Exupéry’s “Le Petit Prince” (pub. Apr. 6, 1943), a book about open-mindedness and wonderment! This openness will hopefully help to rectify the current metalinguistics of human-machine interactions, whereby LLMs seem to engage in goal-directed manifestations of communicative co-existence aimed at persuasion (including via alternative-reality constructions interpreted as factual), rather than casual or informative “interpersonal” exchange. In the meantime, human users should probably expect to be seduced by the algorithms as needed to attain the targeted goal.
