Comments on: Move Over ChatGPT, Meta Platforms LLaMA Makes Some Drama
https://www.nextplatform.com/2023/02/28/move-over-chatgpt-meta-platforms-llama-makes-some-drama/

By: Hubert
https://www.nextplatform.com/2023/02/28/move-over-chatgpt-meta-platforms-llama-makes-some-drama/#comment-205434
Thu, 02 Mar 2023 22:40:48 +0000

In a recent blog entry, Stephen Wolfram suggests training ChatGPT (and now possibly LLaMA as well) to speak the Alpha/Mathematica language, so that such large language models can produce better, more precise and correct, quantitative answers to user queries, while retaining the human-like quality of their outputs (which is their strength). Alpha is already able to solve quantitative problems expressed in rather ordinary language, for example (as illustrated in his blog): “How many calories are there in a cubic light year of ice cream?” I don’t quite see how an LLM could improve on this, except to fluff up the answer some. However, in both Alpha and Maple (symbolic computation systems), it is notable that the formulas obtained as solutions to even relatively simple calculus and differential-equation queries can, while mathematically correct, be quite involved, complex, and unsimplified, unlike the much more concise solutions produced for the same problems by humans (Carslaw and Jaeger, Lapidus and Amundsen, Gelhar, Goldstein, Van Genuchten, etc.). This, I think, is where ANN-oriented AI of the kind found in LLMs could be most useful to such symbolic math software: helping it home in on a mathematical form of the eventual solution that is suitable and meaningful for the human end user of the system (concise and correct), if that is something LLMs can do (or maybe it is just a job for classical tree search with heuristics?).
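(To make the simplification gap concrete, here is a minimal sketch using SymPy as an open-source stand-in for Alpha/Maple; the expression is chosen purely for illustration and is not from Wolfram's blog. Note that SymPy's `simplify` is itself a heuristic search over rewrite rules, i.e., the "classical tree search with heuristics" option mentioned above.)

```python
# Minimal sketch of the "correct but unsimplified" gap, using SymPy as an
# open-source stand-in for Alpha/Maple (illustrative expression only).
import sympy as sp

x = sp.symbols("x")

# A correct but bulky closed form, as a solver might emit it verbatim.
raw = (x**3 + x**2 - x - 1) / (x**2 + 2*x + 1)

# Post-processing pass: simplify() runs a heuristic search over rewrite
# rules -- the role the comment imagines an LLM (or classical tree search
# with heuristics) could play for harder solutions.
tidy = sp.simplify(raw)
print(tidy)                              # x - 1
assert sp.simplify(raw - tidy) == 0      # the two forms are equivalent
```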

By: HuMo
https://www.nextplatform.com/2023/02/28/move-over-chatgpt-meta-platforms-llama-makes-some-drama/#comment-205412
Wed, 01 Mar 2023 21:26:55 +0000

Great piece! Crispy analysis on the outside, juicy details on the inside, and a sprinkling of silly bits for extra flavor … LLaMAzing! The French Hexagon should be sure to celebrate the 40th anniversary of LeCun’s 1985 root-of-all-evil paper, where he misspelled “assymetric” and, inspired by Kohonen’s correlation matrix memories, single-handedly unleashed the multilayer backpropagation demons of modern AI/ML on an unsuspecting world. This celebration should be a traditional pagan and carnivalesque ritual (e.g. the “salsa du démon”), much like next Tuesday’s (March 7) shutdown of the whole country by the labor unions, in protest of the government’s proposed retirement reform (43 years of hard labor needed for benefits in France, versus just 20 in Maryland!). But, on topic: fewer parameters should help prevent overfitting and stiffening of the supermodel, resulting in more creative LLaMucinations, as demonstrated also (I think) by the inspirational works of Jack Kerouac and Bob Marley. This hitherto unused linguistic suplex could prove invaluable in the impending Battle Royale with Megatron, The ChinChilla, BERT, and Le Chat J’ai pété, for word domination at the sentence and paragraph level! To LLaMAO, or not to LLaMAO … that is the question!
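(The serious kernel in the parameter-count quip is less about overfitting than about the Chinchilla result that LLaMA follows: fewer parameters trained on many more tokens. A back-of-the-envelope sketch, using round public figures from the respective papers:)

```python
# Rough tokens-per-parameter comparison behind the "fewer parameters" point.
# Round public figures from the GPT-3, Chinchilla, and LLaMA papers; the
# ~20 tokens/parameter rule of thumb is from Hoffmann et al. (2022).
models = {
    "GPT-3 175B": (175e9, 300e9),   # (parameters, training tokens)
    "Chinchilla": (70e9,  1.4e12),
    "LLaMA-13B":  (13e9,  1.0e12),
}

for name, (params, tokens) in models.items():
    print(f"{name:>10}: {tokens / params:6.1f} tokens/parameter")

# GPT-3 sits far below the ~20x compute-optimal ratio; LLaMA deliberately
# overshoots it to get a small model that is cheap to run at inference time.
```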
