Comments on: The Crazy Eights Of Large Language Models https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/ In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Xavier O'Neill https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-211142 Fri, 14 Jul 2023 21:58:55 +0000 https://www.nextplatform.com/?p=142223#comment-211142 AI is like an 8-year-old that’s read the entire Internet. AI is to software what quantum computing is to hardware. Once quantum computing is commoditized, the real fun begins.

By: Mark Funk https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-209905 Mon, 12 Jun 2023 18:12:44 +0000 https://www.nextplatform.com/?p=142223#comment-209905 In reply to Timothy Prickett Morgan.

Just found a quick synopsis which gets the essence across: The Evitable Conflict, in Wikipedia, https://en.wikipedia.org/wiki/The_Evitable_Conflict

The question to ask is, what if it did not have even that control?

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-209795 Sat, 10 Jun 2023 03:24:33 +0000 https://www.nextplatform.com/?p=142223#comment-209795 In reply to Mark Funk.

Hiya Mark! I did read them a long, long time ago. But perhaps it is time to review them…

By: Mark Funk https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-209783 Fri, 09 Jun 2023 17:10:19 +0000 https://www.nextplatform.com/?p=142223#comment-209783 In reply to Timothy Prickett Morgan.

Timothy – Have you ever read Isaac Asimov’s “I, Robot” short stories? Those three laws were a form of intent moderation. Perhaps, for our situation, start with the last.

By: HuMo https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-207195 Thu, 13 Apr 2023 18:49:16 +0000 https://www.nextplatform.com/?p=142223#comment-207195 In reply to Hubert.

Hi Hu! Tell your friend 8^b (unusual name!) to stop procrastinating and immediately watch the in-depth documentary by top-notch boffins (already 23 years old) where they successfully combined genomics and AI, outside of the lab, to protect humanity from nearly all future extinctions: “The 6th Day”, I think. It candidly details how test subject Adam (oddly the same name as the ANN training algo.) had his genome and NN weights downloaded to a 9-bit tape syncord, and then reflashed onto a gooey blank for an essentially infinite lifespan. 8^b can find more details on the back of his/her eyelids! Too bad that it doesn’t yet work here in France, as LLMs haven’t been trained on French text so far (nor Spanish, Chinese, Swahili, …) and might translate “the spirit is strong but the flesh is weak” into “the vodka is great but the meat is undercooked” (an old classic, I think). Then again, no one knows how far the French Government would raise the retirement age and hard labor requirements with such a longer life sentence (word!). A great documentary nevertheless…

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-207185 Thu, 13 Apr 2023 12:06:27 +0000 https://www.nextplatform.com/?p=142223#comment-207185 In reply to Andrew.

True. Anything that created sex is alright by me.

By: Andrew https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-207163 Thu, 13 Apr 2023 00:41:52 +0000 https://www.nextplatform.com/?p=142223#comment-207163 > The Top Five Extinction Level Events On Earth So Far

There are two _far_ greater extinction events missed in your list:
1. The Great Oxygenation Event (https://en.wikipedia.org/wiki/Great_Oxidation_Event), from about 2.4-2.0 billion years ago. This triggered the genesis of sexual reproduction and the eukaryotes – i.e., the requirements for multi-cellular life.
2. The Cryogenian or ‘Snowball Earth’ (https://en.wikipedia.org/wiki/Cryogenian), from 720-635 million years ago. This triggered the ‘Ediacaran biota’, life’s first experimentation with larger multi-cellular plants and animals, and set the scene for the later Cambrian Explosion.

By: Hubert https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-207134 Wed, 12 Apr 2023 01:04:58 +0000 https://www.nextplatform.com/?p=142223#comment-207134 In reply to Timothy Prickett Morgan.

Exactly! Different parts of the article probably resonate differently with different folks, which enhances the robustness of the species (as a whole) — and Bowman is not entirely consistent between the front- and back-end of his exposition in the current manuscript (in my opinion ;^} ). The Human Genome Project (public) and the parallel work of Celera Genomics (private) ended 20 years ago, opening the door to human gene editing, for which ethics-oriented regulation had to be developed to prevent misuse of the newly developed data and tech. Is this where we are today in LLMs/AI/ML (asking for a friend 8^b)?

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-207130 Tue, 11 Apr 2023 21:50:17 +0000 https://www.nextplatform.com/?p=142223#comment-207130 In reply to Hubert.

Strange coincidence, I saw that, too. You can only put so many asides into a single thought. I think. Maybe. Maybe not.

By: Hubert https://www.nextplatform.com/2023/04/10/the-crazy-eights-of-large-language-models/#comment-207126 Tue, 11 Apr 2023 19:43:45 +0000 https://www.nextplatform.com/?p=142223#comment-207126 OSHA should definitely investigate the work environment for human testers at AI/ML-oriented shops (both in industry and academia), particularly for conditions potentially hazardous to mental health, and mandate that periodic evaluations be performed, and that proper first-aid training and support services be implemented. Section 8-ing injured employees (e.g., Blake Lemoine) is just not acceptable, and LLM software (e.g., LaMDA) has certainly demonstrated its ability to cause possibly permanent disabilities to its human users. The software should not be released without unambiguous warning labels listing potential side effects and stating the intended use as “entertainment” or “recreational” (do not inhale).

This being said, the paper by Dr. (not-Dave) Bowman is interesting for bringing a varied perspective to the topic of LLMs, with more than 7 pages of references. The “Thing 1”, based on Wei et al. (2022), shows interesting emergent behavior when training reaches FLOPs of the order of Avogadro’s constant (6.022×10^23), whereby those larger LLMs can then do some 3-digit algebra while smaller ones can’t (Bowman’s Fig. 1, Wei’s Fig. 2). The appearance of Avogadro’s constant (order of magnitude) suggests that we should prepare to celebrate the 100th anniversary of Jean Perrin’s 1926 Nobel Prize in Physics, as he is credited with naming it. However, in section 9.5, Bowman notes that the larger LLMs still fail at such simple reasoning tasks as negation and Modus Tollens (Huang and Wurgaft, in McKenzie et al. 2022, show that this gets worse as LLMs get larger). This follows issues noted earlier in section 8, where queries requesting a “step-by-step” answer could help the LLM produce correct quantitative answers (e.g., Kojima et al., 2022; a rough sketch of this prompting trick follows below), but these would actually correspond to memorization of “specific examples or strategies for solving tasks from their training data without internalizing the reasoning process that would allow them to do those tasks robustly”.

In other words, while the cat has left the hat, blowing our minds clean off in the process, the jury is still out deliberating whether it is alive or just feels lucky!
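
For the curious, here is a minimal sketch of that “step-by-step” prompting trick (zero-shot chain-of-thought, per Kojima et al., 2022). The call_llm() function is a hypothetical stand-in for whatever completion API is actually in use, and its canned reply only lets the sketch run end to end without a model; nothing here is Bowman’s or Kojima’s own code.

import re

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call -- swap in a real completion API here."""
    # Canned reply so the sketch runs end to end without a model.
    return "327 plus 456: 7 + 6 = 13, carry 1; 2 + 5 + 1 = 8; 3 + 4 = 7. The answer is 783."

def direct_prompt(question: str) -> str:
    # Plain question-and-answer format, no reasoning trigger.
    return f"Q: {question}\nA:"

def step_by_step_prompt(question: str) -> str:
    # The zero-shot chain-of-thought trick: append a reasoning trigger to the prompt.
    return f"Q: {question}\nA: Let's think step by step."

def extract_final_number(reply: str):
    """Treat the last number in the reply as the model's final answer."""
    numbers = re.findall(r"-?\d+", reply)
    return int(numbers[-1]) if numbers else None

if __name__ == "__main__":
    question = "What is 327 + 456?"
    for build_prompt in (direct_prompt, step_by_step_prompt):
        reply = call_llm(build_prompt(question))
        print(f"{build_prompt.__name__}: {extract_final_number(reply)}")

With a real model behind call_llm(), the direct prompt is the baseline and the step-by-step variant is the one Kojima et al. report as yielding more correct quantitative answers; Bowman’s caveat above is that such gains may reflect memorized solution strategies rather than robust reasoning.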
