New Paid Opportunities Arise as AI Models Require Human Guidance
Since the launch of ChatGPT in November 2022, concerns have been mounting about generative AI disrupting professional jobs. The technology lets AI models produce highly realistic text and images in response to prompts. With estimates suggesting that up to 300 million jobs could be taken over by AI, including roles in office support, legal work, and various scientific fields, the fear of displacement is understandable.
However, it turns out that while AI models can generate remarkable outputs, they still require human guidance and review. This necessity is creating new opportunities for paid careers and side hustles. One company leading the way in this area is Prolific, which connects AI developers with research participants and compensates them for reviewing AI-generated material.
Prolific’s clients, including Meta, Google, the University of Oxford, and University College London, rely on human reviewers to assess the quality of AI-generated outputs. To ensure fairness, Prolific enforces a minimum pay rate of $8 per hour and recommends that developers pay participants at least $12 per hour. Reviewers work closely with Prolific’s customers, learning about the inaccuracies or harmful material they may come across and providing valuable feedback.
One research participant, who asked to remain anonymous, described his experience with Prolific. He often flagged where AI models went wrong and needed correction so that they would not produce unsavory responses; in some cases, he encountered models promoting illicit activities. Shocking as such instances may be, they serve to test the boundaries of these systems and generate the feedback needed to prevent harm in the future.
According to Phelim Bradley, the CEO of Prolific, a growing number of AI workers play a crucial role in shaping the data that goes into AI models and the outputs those models produce. As governments grapple with the regulation of AI, Bradley stresses the importance of fair and ethical treatment of AI workers, transparency in data sourcing, and the avoidance of bias in training AI systems.
The demand for AI-related jobs is on the rise. LinkedIn data reveals a significant increase in job postings mentioning AI or generative AI globally. Companies like Google, Microsoft, and Meta are all vying for dominance in the field of generative AI, driven by the perceived productivity gains it offers. However, concerns have been raised by regulators and AI ethicists regarding the lack of transparency in how these models make decisions and the need for AI to prioritize human interests.
Apart from reviewing AI-generated material, other roles that put humans in the loop of AI development are emerging. Prompt engineers, for example, specialize in crafting and refining the text prompts fed to generative AI models to elicit the best possible responses. These individuals play a crucial role in shaping how AI models behave.
Furthermore, AI is being used to automate the review of regulatory and legal documentation. While this offers significant time savings, researchers emphasize the importance of human oversight: human feedback helps AI models learn from their mistakes and improve through trial and error.
As AI continues to advance, it is vital to strike a balance between the capabilities of AI models and the role of human oversight. Ensuring that AI is aligned with human preferences and safety is a priority for researchers and companies alike. By incorporating human guidance and review, we can build a more ethical and responsible foundation for the future applications of AI.