Wednesday, May 29, 2024

Meta releases ‘Personal Boundary’ feature to combat creepy VR behaviour

Meta has announced a new feature to allow more personal space for people’s avatars in virtual-reality worlds.

The metaverse is still at concept stage, but the latest attempts to create virtual worlds are already facing an age-old problem: harassment.

Bloomberg’s technology columnist Parmy Olson told the BBC’s Tech Tent programme about her “creepy” experiences.

And one woman likened her own traumatic experience in VR to sexual abuse.

Meta has now announced a new feature, Personal Boundary, which begins rolling out today. It prevents avatars from coming within a set distance of each other, creating more personal space for people and making it easier to avoid these unwanted interactions.

It stops others “invading your avatar’s personal space,” said Meta. “If someone tries to enter your Personal Boundary, the system will halt their forward movement as they reach the boundary.”
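The mechanics Meta describes — halting an approaching avatar's forward movement once it reaches a fixed radius around you — can be sketched roughly as follows. This is an illustrative assumption, not Meta's published implementation; the function name and the boundary radius are made up for the example.

```python
import math

# Illustrative sketch of a distance-based personal-boundary check.
# The 1.2 m radius is an assumed value; Meta has not published its numbers.
BOUNDARY_RADIUS_M = 1.2

def apply_personal_boundary(my_pos, other_pos, proposed_step):
    """Return the other avatar's allowed new (x, y) position.

    If the proposed step would bring the other avatar inside my
    Personal Boundary, halt it at the boundary edge instead.
    """
    nx = other_pos[0] + proposed_step[0]
    ny = other_pos[1] + proposed_step[1]
    dx, dy = nx - my_pos[0], ny - my_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= BOUNDARY_RADIUS_M:
        return (nx, ny)  # outside the boundary: movement allowed
    if dist == 0:
        return other_pos  # degenerate case: refuse the step entirely
    # Halt at the boundary: project the position back onto the circle.
    scale = BOUNDARY_RADIUS_M / dist
    return (my_pos[0] + dx * scale, my_pos[1] + dy * scale)
```

For instance, an avatar 2 m away that tries to step 1.5 m toward you would be stopped 1.2 m out rather than ending up 0.5 m away.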

It is being made available in Meta’s Horizon Worlds and Horizon Venues software. The firm said it was a “powerful example of how VR has the potential to help people interact comfortably,” but acknowledged there was more work to be done.

For some, the news will be welcome. “I did have some moments when it was awkward for me as a woman,” Olson said.

She was visiting Meta’s Horizon Worlds, its virtual-reality platform where anyone 18 or older can create an avatar and hang out.

To do so, users need one of Meta’s VR headsets, and the space offers the chance to play games and chat to other avatars, none of whom has legs.

“I could see straight away I was the only woman, the only female avatar. And I had these men kind of come around me and stare at me silently,” Olson said.

“Then they started taking pictures of me and giving the pictures to me, and I had a moment when a guy zoomed up to me and said something. And in virtual reality, if someone is close to you, then the voice sounds like someone is literally talking into your ear. And it took me aback.”

She experienced similar discomfort in Microsoft’s social VR platform.

“I was talking to another lady, and within minutes of us chatting a guy came up and started chatting to us and following us around saying inappropriate things, and we had to block him,” she said. “I have since heard of other women who have had similar experiences.”

She said while she wouldn’t describe it as harassment, it was “creepy and awkward.”

Nina Jane Patel went a lot further this week when she told the Daily Mail that she was abused in Horizon Venues, likening it to sexual assault. She described how a group of male avatars “groped” her and subjected her to a stream of sexual innuendo. They photographed her and sent a message reading: “Don’t pretend you didn’t love it.”

Meta responded to the paper saying that it was sorry. “We want everyone to have a positive experience, and easily find the safety tools that can help in a situation like this – and help us investigate and take action.”

Moderating content in the nascent metaverse is going to be challenging, and Meta chief technology officer Andrew Bosworth admitted that it would offer both “greater opportunities and greater threats.”

“It could feel a lot more real to me, if you were being abusive towards me because it feels a lot more like physical space,” he said in an interview with the BBC late last year.

But he said people in virtual roles would have “a great deal more power” over their environments. “If I were to mute you, you would cease to exist for me, and your ability to do harm to me is immediately nullified.”

And he questioned whether people would want the kind of moderation that exists on platforms such as Facebook when having chats in virtual reality.

“Do you really want the system or a person standing by listening in? Probably not. So I think we have a privacy trade-off – if you want to have a high degree of content safety, or what we would call integrity, well that trades off against privacy.”

And in Meta’s vision of the metaverse, where different rooms are run by different companies, the trade-off gets even more complex as people move out of the Meta-controlled virtual world into others.

“I can give no guarantees about either the privacy, nor the integrity of that conversation,” he said.

Olson agreed that it was going to be “a very difficult thing for Facebook, Microsoft, and others to take care of.”

“When you are scanning text for hate speech, it’s hard but doable – you can use machine-learning algorithms. To process visual information about an avatar or how close one is to another, that is going to be so expensive computationally, that is going to take up so much computer power, I don’t know what technology can do that.”

Meta is investing $10 billion in its metaverse plans, part of which will need to go toward building new ways of moderating content.

“We have learned a tremendous amount in the last 15 years of online discourse… so we’re going to bring all that knowledge with us to do the best that we can to build these things from the ground up, to give people a lot of control over their own experience,” Bosworth said.

https://www.bbc.com/news/technology-60247542

 

BIG Media