Axiom Distributed AI - WU analysis

Message boards : Number crunching : Axiom Distributed AI - WU analysis

Profile marmot
Joined: 13 Dec 15
Posts: 196
Credit: 2,391,570
RAC: 12
Message 12097 - Posted: 2 Feb 2026, 22:54:53 UTC

I linked this infographic (https://ibb.co/XxXTYH7m) in my analysis thread over at the forums. Forum discussions of new projects by the younger generations are rare; their Discord is probably where any influence over the WUs is actually going to happen.

Here's the thread link. https://ibb.co/XxXTYH7m

If I missed something, let me know. I'd genuinely like a chance to influence the training, and I created some .txt files. One of my server builds was meant to train my own test neural network, but life got in the way....

This AI emergence is being done with a "move fast, break things" attitude, and humans are at an inflection point now. Not in 10 years. Now...
ID: 12097
mmonnin
Joined: 22 Aug 16
Posts: 524
Credit: 2,513,337
RAC: 636
Message 12100 - Posted: 3 Feb 2026, 21:35:10 UTC

AI data centers use the best hardware for training. This project distributes that work to users, but we'll never be able to do it as efficiently as a data center with its 480-volt servers and 5090 GPUs. AI servers are already raising everyday electricity costs enough, and this research will run on our less efficient processors. I don't like most AI as it is, and while I don't want AI data centers collecting all my data, I definitely don't want to use my own less efficient processing power to do it. This is not a project I wish to support, at a fundamental level. The lack of a response from the admin doubles down on that. It's also very memory intensive, as it's AI, and Python doesn't help.

Users are supposed to run the alternative client, as the standard BOINC client doesn't do what the admin wants, and we're supposed to feed it files.

PyHelix
— 1:30 PM
You can put any files in there: documents, images, code, PDFs. Maybe even real-time data if you want to get creative with that.

The admin was asked to describe the project in 3 sentences:
PyHelix
— 1:49 PM
If I don't go into the fact that it's a 17.8-billion-parameter model, it would be:
Axiom trains to recognize patterns in data using your computer's spare processing power. Rather than using massive data centers to train this AI, we distribute that work across many volunteers. As the model learns, you can watch its progress on the Axiom website.

PyHelix
— 2:01 PM
So really we are decentralizing AI training. It's not an LLM, it does pattern recognition, and if it works the model is open source.

PyHelix
— 2:15 PM
It's research. Distributed training is the goal. Not every project needs a product at the end.
ID: 12100
Profile marmot
Joined: 13 Dec 15
Posts: 196
Credit: 2,391,570
RAC: 12
Message 12135 - Posted: 25 Feb 2026, 18:23:58 UTC - in response to Message 12100.  
Last modified: 25 Feb 2026, 18:26:37 UTC

All great points, especially about energy efficiency.

Even with less efficient machines, I believe that in the upcoming AI world, open-source, citizen-trained models will be critical to maintaining a free society. AI existing only in the hands of a few particularly powerful trillion-dollar corporations, not even in government-run and government-maintained hands, undermines all current government structures.

Turbulent times ahead.

Sparsely connected neural networks can improve efficiency by 90%+: https://arxiv.org/pdf/1907.04840
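As a rough illustration of where those savings come from, here's a minimal numpy sketch of magnitude pruning — one ingredient of the sparse-training methods in the linked paper (the paper itself goes further, adjusting sparsity dynamically during training). The layer shape and 10% density below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense weight matrix for one hypothetical layer (256 inputs -> 128 outputs).
W = rng.normal(size=(256, 128))

def sparsify(weights, density=0.1):
    """Keep only the largest-magnitude fraction of weights (here 10%),
    zeroing the rest. Storage and compute then scale with the nonzero
    count instead of the full matrix size."""
    k = int(weights.size * density)
    # Threshold = magnitude of the k-th largest weight.
    threshold = np.partition(np.abs(weights).ravel(), -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W_sparse, mask = sparsify(W, density=0.1)
print(f"nonzero weights kept: {mask.mean():.0%}")
```

With 90% of the weights zeroed, a sparse matrix format only has to store and multiply the surviving 10%, which is where the headline efficiency gains come from.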

DeepSeek, China's entry, was reportedly trained for something like 1% of the usual training costs, on earlier-generation NVidia hardware (export restrictions).

If Axiom's philosophy is genuinely open source, and not an 'open source' scam like the one perpetrated by Sam Altman (read about his history), then I'd be very supportive and grow my hardware toward their project.

But, yeah, they have been standoffish and have broken our standard WU coding norms in multiple ways (their desire to 'optimize' the hardware for their WU).

The latest break for me was that their GPU WU refuses to share the NVidia card, even with a proper 0.16% GPU usage setting that works with any other project.
Minecraft and Einstein WUs sitting waiting on the Axiom AI GPU hogging the entire card.
ID: 12135
mmonnin
Joined: 22 Aug 16
Posts: 524
Credit: 2,513,337
RAC: 636
Message 12139 - Posted: 25 Feb 2026, 21:43:56 UTC

0.16 in an app_config?

An app_config does not correlate to a GPU utilization %; it only sets the number of tasks assigned to a GPU.
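For reference, the relevant app_config.xml fragment looks something like this (the app name below is a guess — check the actual <app> name in your client_state.xml). A gpu_usage of 0.16 is a scheduling budget, telling the client it may run up to floor(1/0.16) = 6 tasks per card; it does not cap how much of the GPU each task actually uses:

```xml
<app_config>
  <app>
    <name>axiom_gpu</name>  <!-- hypothetical: use the real app name from client_state.xml -->
    <gpu_versions>
      <!-- 0.16 of a GPU per task: up to 6 tasks may be scheduled on one card.
           This does NOT limit the actual GPU utilization of any one task. -->
      <gpu_usage>0.16</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

So if the Axiom task saturates the card, the other five scheduled tasks will simply wait on it, which matches the behavior described above.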
ID: 12139
Link
Joined: 20 Jun 12
Posts: 171
Credit: 379,698
RAC: 65
Message 12140 - Posted: 26 Feb 2026, 18:50:23 UTC - in response to Message 12135.  

The latest break for me was that their GPU WU refuses to share the NVidia card, even with a proper 0.16% GPU usage setting that works with any other project.
Minecraft and Einstein WUs sitting waiting on the Axiom AI GPU hogging the entire card.

A WU using the entire card is actually the optimal case; running more than one WU per GPU is just a workaround for when one WU can't use the entire GPU. I'm also not sure on which of your GPUs 6 concurrent WUs would make sense: you have one pretty slow iGPU and two older cards with 3 GB of VRAM.
ID: 12140


©2026 Sébastien