Meta disbands Responsible AI team

Meta has announced the redistribution of its Responsible AI team members among various company divisions, as part of an initiative to “enhance collaboration and proximity to core product development.” 

As reported by The Information, the decision is intended to strengthen Meta's efforts to prevent potential harms associated with AI, according to an email from a company spokesperson.

Most of the Responsible AI team will move to Meta's generative AI division, where they will continue to support company-wide initiatives on responsible AI development and use.

Additionally, some team members will be integrated into AI infrastructure roles. The reshuffling is designed to bring the staff closer to the heart of product and technology development within the company.

According to the statement, these adjustments align with Meta’s commitment to prioritising and investing in the safe and responsible development of AI. The spokesperson emphasised that the changes are intended to enhance scalability to meet future needs.

Rollout of generative AI tools
In October, Meta, the parent company of Facebook, began the rollout of generative AI tools. These tools are capable of generating content like image backgrounds and written text variations for advertisers.

Meta’s portfolio of AI products encompasses its language model Llama 2 and a Meta AI chatbot, capable of producing text responses and lifelike images.

Computing says:
This is the first step towards a shift-left mindset that we've seen in AI. In DevOps, shifting left means moving security earlier in the project timeline rather than bolting it on as an afterthought at the end; the same applies to AI, but with security replaced by responsibility, ethics and transparency.

While the news sounds suspicious at first, it actually looks like a sensible move from Meta. Just as shift left evolved into DevSecOps, perhaps this signals the rise of AITrustOps?

