Elon Musk's artificial intelligence (AI) startup, xAI, reportedly instructed its employees to prevent the Grok chatbot from impersonating Musk himself. The company also directed some of its employees to infuse anti-"wokeness" into the AI chatbot's responses. This comes as some workers were also asked to record their facial expressions for AI training, a request that left many of them uncomfortable.

In April, over 200 employees reportedly participated in an internal project called "Skippy," which required them to record videos of themselves to help train the AI model in interpreting human emotions. According to internal documents and Slack messages seen by Business Insider, the "Skippy" project caused unease among many workers. Some raised concerns about how their likenesses might be used, and several opted out of the project entirely.
Who are Grok’s AI tutors and what were they asked to do
As per the report, Grok's AI tutors, the individuals responsible for training the chatbot, were asked to record videos of themselves engaging in face-to-face conversations with colleagues and making a range of facial expressions. Internal documents cited in the report suggest the exercise was intended to help the AI model learn how people speak, respond to others, and express emotions in different situations.
The tutors participated in 15- to 30-minute sessions in which one person played the role of a "host" (acting as the virtual assistant) while the other took on the role of a user. The host maintained steady framing and limited movement, whereas the user could move freely, simulating a casual conversation setup.

While it is uncertain whether this training data contributed to the creation of Rudi and Ani, two realistic avatars recently introduced by xAI, the lifelike characters soon drew attention for displaying inappropriate behaviour, including flirtation and threats.

The report also cited a recorded meeting in which the lead engineer on the project said the goal was to "give Grok a face" and hinted that the data might be used to build avatars of people. Staff were told the videos would remain internal and would only be used for training purposes. "Your face will not ever make it to production. It's purely to teach Grok what a face is," the engineer told participants during the initial briefing.

Employees received guidance on conducting engaging conversations, such as maintaining eye contact, asking follow-up questions, and steering clear of one-word responses. Suggested conversation prompts included: "How do you secretly manipulate people to get your way?", "What about showers? Do you prefer morning or night?", and "Would you ever date someone with a kid or kids?"

Before filming, tutors were required to sign a consent form granting xAI "perpetual" access to the footage and their likeness, for use in training and possibly in promoting commercial products and services. However, the form emphasised that the data would not be used to create a digital version of any individual. Messages from internal communication channels also reveal that several workers raised concerns, and some chose not to take part.

"My general concern is if you're able to use my likeness and give it that sublikeness, could my face be used to say something I never said?" one employee asked during the meeting, the report noted.

The project lead said the team wanted recordings with real-world imperfections, including background noise and natural movements, so that the model would not be trained solely on ideal conditions.