Message boards :
Number crunching :
Axiom Distributed AI - WU analysis
| Author | Message |
|---|---|
marmot Send message Joined: 13 Dec 15 Posts: 196 Credit: 2,391,570 RAC: 12 |
I linked this infographic https://ibb.co/XxXTYH7m to my analysis thread over at the forums. Forum discussions with new projects by younger generations are rare; their Discord is where influence over the WUs is actually going to happen? Here's the thread link: https://ibb.co/XxXTYH7m If I missed something, let me know. I actually want a chance to influence the training and have created some .txt files. One of my server builds was meant to train my own test neural network, but life got in the way... This AI emergence is being done with a "move fast, break things" attitude, and humans are at an inflection point now. Not in 10 years. Now... |
Send message Joined: 22 Aug 16 Posts: 524 Credit: 2,513,421 RAC: 638 |
AI data centers use the best hardware to do the training. This project is distributing that work to users, but we'll never be able to do it as efficiently as a data center with its 480-volt servers and 5090 GPUs. AI servers are already raising everyday electricity costs enough, and the research from this will use our less efficient processors. I don't like most AI as it is, and while I don't want AI data centers collecting all my data, I definitely don't want to use my own less efficient processing power to do it. This is not a project I wish to support at a fundamental level, and the lack of a response from the admin doubles down on that. It's very memory intensive, as it's AI, and Python doesn't help. Users are supposed to run the alternative client, because the standard BOINC client doesn't do what the admin wants, and we're supposed to feed it files. PyHelix — 1:30 PM: You can put any files in there: documents, images, code, PDFs. Maybe even real-time data if you want to get creative with that. It was asked to describe the project in 3 sentences: PyHelix — 1:49 PM: If I don't go into the fact that it's a 17.8 billion parameter model, it would be: Axiom trains to recognize patterns in data using your computer's spare processing power. Rather than using massive data centers to train this AI, we distribute that work across many volunteers. As the model learns, you can watch its progress on the Axiom website. PyHelix — 2:01 PM: So really we are decentralizing AI training. It's not an LLM, it does pattern recognition, and if it works the model is open source. PyHelix — 2:15 PM: It's research. Distributed training is the goal. Not every project needs a product at the end. |
marmot Send message Joined: 13 Dec 15 Posts: 196 Credit: 2,391,570 RAC: 12 |
All great points, especially about energy efficiency. Even with less efficient machines, I believe that in the upcoming AI world, open-source, citizen-trained models will be critical to maintaining a free society. AI being only in the hands of a few particularly powerful trillion-dollar corporations, not even in government-run and -maintained hands, undermines all current government structures. Turbulent times ahead. Sparsely connected neural networks can improve efficiency by 90%+: https://arxiv.org/pdf/1907.04840. DeepSeek, China's entry, was trained for something like 1% of the usual training costs on earlier-gen NVidia hardware (export restrictions). If Axiom's philosophy is genuinely open source, and not an 'open source' scam like the one perpetrated by Sam Altman (read about his history), then I'd be very supportive and grow my hardware towards their project. But, yeah, they have been standoffish and are breaking our standard WU coding norms in multiple ways (their desire to 'optimize' the hardware for their WU). The latest break for me was that their GPU WU refuses to share the NVidia card even with proper 0.16% GPU usage settings that work with any other project. Minecraft and Einstein WUs sit waiting while the Axiom AI WU hogs the entire card. |
Send message Joined: 22 Aug 16 Posts: 524 Credit: 2,513,421 RAC: 638 |
0.16 in an app_config? An app_config value does not correspond to a GPU utilization %; it only sets the number of tasks assigned to a GPU. |
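For reference, a minimal sketch of the app_config.xml being discussed (the app name `axiom` is a guess; the real name has to be taken from the project's entries in client_state.xml). A gpu_usage of 0.16 tells the BOINC client that each task reserves 0.16 of a GPU, so it may schedule up to 6 tasks per card; it does not throttle a task's actual GPU utilization:

```xml
<!-- app_config.xml, placed in the project's folder under the BOINC data directory -->
<!-- NOTE: the app name "axiom" is an assumption; use the <name> shown in client_state.xml -->
<app_config>
  <app>
    <name>axiom</name>
    <gpu_versions>
      <!-- fraction of a GPU each task reserves: 1 / 0.16 allows ~6 concurrent tasks -->
      <gpu_usage>0.16</gpu_usage>
      <!-- fraction of a CPU core budgeted per task for scheduling purposes -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After editing the file, the client has to re-read it (Options → Read config files, or a client restart) for the change to take effect.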
Send message Joined: 20 Jun 12 Posts: 171 Credit: 379,710 RAC: 66 |
> The latest break for me was that their GPU WU refuses to share the NVidia card even with proper 0.16% GPU usage settings that work with any other project.

A WU using the entire card is actually the optimal case; running more than one WU per GPU is just a workaround for when one WU can't use the entire GPU. I'm also not sure on which of your GPUs 6 concurrent WUs would make sense: you have one pretty slow iGPU and two older cards with 3 GB of VRAM. |
©2026 Sébastien