‘Culture and truth is dictated by an AI cartel’: how a tiny startup wants to change AI inference forever by allowing ANYONE to participate


  • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
  • Can run on Linux, macOS, Android, and iOS, but not Windows
  • AI models needing 16GB RAM can run on two 8GB laptops

Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. Exo now offers an alternative by enabling distributed artificial intelligence (AI) inference across a network of everyday devices.

The software lets users pool the computing power of multiple computers, smartphones, and even single-board computers (SBCs) like Raspberry Pis to run models that would otherwise be out of reach for any one of those devices.
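The core idea behind pooling devices this way is to split a model's layers across machines in proportion to each one's available memory, so that two 8GB laptops can together host a model needing 16GB. The sketch below illustrates that kind of memory-weighted partitioning; the function name and layer counts are illustrative assumptions, not exo's actual API.

```python
# Hypothetical sketch of memory-weighted model partitioning: assign each
# device a contiguous range of layers proportional to its share of the
# group's total RAM. Names and numbers are illustrative, not exo's API.

def partition_layers(num_layers, device_memory_gb):
    """Return (start, end) layer ranges, one per device, sized by memory."""
    total = sum(device_memory_gb)
    bounds, start, cumulative = [], 0, 0.0
    for mem in device_memory_gb:
        cumulative += mem
        end = round(num_layers * cumulative / total)
        bounds.append((start, end))
        start = end
    return bounds

# Two 8GB laptops split a 32-layer model evenly:
print(partition_layers(32, [8, 8]))  # → [(0, 16), (16, 32)]
```

An uneven mix of devices simply gets an uneven split: `partition_layers(32, [8, 4, 4])` gives the 8GB machine half the layers and the two 4GB machines a quarter each.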
