AS AN ACTIVE user of generative AI tools, I have always been curious about the foundational workings of Large Language Models (LLMs). My curiosity was satisfied when my sister recommended that I dabble in some interesting on-demand AI projects. Engaging with these projects, rather than merely consuming AI, gave me clarity on how relentlessly the AI training industry is growing. Pioneering the industry is Scale AI, founded by Alexandr Wang, who was recruited by Meta in June with a USD14bil offer.
AI training aims at feeding, tuning and perfecting the content in LLMs. Behind the facade of "human thinking" lies a great deal of labour-intensive work by many AI trainers. Every piece of information in LLMs has to be created and trained by a subject-matter expert, down to the smallest minutiae, in diverse languages and across various regions, to provide high-quality training data to AI companies.